=== RUN TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run: out/minikube-linux-amd64 start -p no-preload-215237 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.32.1
E0127 12:30:47.954311 478387 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/kindnet-662609/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p no-preload-215237 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.32.1: signal: killed (26m33.904231619s)
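The "signal: killed" status above means the `minikube start` process did not exit on its own: it was killed externally after 26m33s, which is what the Go test harness does when the suite hits its overall timeout. A minimal sketch of how `os/exec` surfaces that state (the 1s deadline and `sleep 60` command are stand-ins, not the harness's actual values):

	package main

	import (
		"context"
		"errors"
		"fmt"
		"os/exec"
		"syscall"
		"time"
	)

	func main() {
		// Hypothetical short deadline standing in for the harness's -timeout.
		ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
		defer cancel()

		cmd := exec.CommandContext(ctx, "sleep", "60")
		err := cmd.Run()

		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			if ws, ok := exitErr.Sys().(syscall.WaitStatus); ok && ws.Signaled() {
				// This is the "signal: killed" case seen in the test output.
				fmt.Println("killed by signal:", ws.Signal())
				return
			}
			fmt.Println("non-zero exit:", exitErr.ExitCode())
		}
	}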
-- stdout --
* [no-preload-215237] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=20318
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/20318-471120/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-471120/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the kvm2 driver based on existing profile
* Starting "no-preload-215237" primary control-plane node in "no-preload-215237" cluster
* Restarting existing kvm2 VM for "no-preload-215237" ...
* Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
* Configuring bridge CNI (Container Networking Interface) ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image registry.k8s.io/echoserver:1.4
- Using image fake.domain/registry.k8s.io/echoserver:1.4
- Using image docker.io/kubernetesui/dashboard:v2.7.0
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p no-preload-215237 addons enable metrics-server
* Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
-- /stdout --
** stderr **
I0127 12:30:40.727312 532344 out.go:345] Setting OutFile to fd 1 ...
I0127 12:30:40.727428 532344 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:30:40.727437 532344 out.go:358] Setting ErrFile to fd 2...
I0127 12:30:40.727443 532344 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:30:40.727651 532344 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-471120/.minikube/bin
I0127 12:30:40.728186 532344 out.go:352] Setting JSON to false
I0127 12:30:40.729253 532344 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":11584,"bootTime":1737969457,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0127 12:30:40.729347 532344 start.go:139] virtualization: kvm guest
I0127 12:30:40.731301 532344 out.go:177] * [no-preload-215237] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
I0127 12:30:40.732412 532344 out.go:177] - MINIKUBE_LOCATION=20318
I0127 12:30:40.732410 532344 notify.go:220] Checking for updates...
I0127 12:30:40.733506 532344 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0127 12:30:40.734483 532344 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20318-471120/kubeconfig
I0127 12:30:40.735546 532344 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-471120/.minikube
I0127 12:30:40.736524 532344 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0127 12:30:40.737455 532344 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0127 12:30:40.738819 532344 config.go:182] Loaded profile config "no-preload-215237": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 12:30:40.739241 532344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:30:40.739308 532344 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:30:40.754514 532344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34301
I0127 12:30:40.755024 532344 main.go:141] libmachine: () Calling .GetVersion
I0127 12:30:40.755618 532344 main.go:141] libmachine: Using API Version 1
I0127 12:30:40.755681 532344 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:30:40.756076 532344 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:30:40.756268 532344 main.go:141] libmachine: (no-preload-215237) Calling .DriverName
I0127 12:30:40.756497 532344 driver.go:394] Setting default libvirt URI to qemu:///system
I0127 12:30:40.756868 532344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:30:40.756919 532344 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:30:40.771021 532344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45913
I0127 12:30:40.771473 532344 main.go:141] libmachine: () Calling .GetVersion
I0127 12:30:40.771933 532344 main.go:141] libmachine: Using API Version 1
I0127 12:30:40.771952 532344 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:30:40.772224 532344 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:30:40.772442 532344 main.go:141] libmachine: (no-preload-215237) Calling .DriverName
I0127 12:30:40.806602 532344 out.go:177] * Using the kvm2 driver based on existing profile
I0127 12:30:40.807876 532344 start.go:297] selected driver: kvm2
I0127 12:30:40.807894 532344 start.go:901] validating driver "kvm2" against &{Name:no-preload-215237 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-215237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.127 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0127 12:30:40.807993 532344 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0127 12:30:40.808648 532344 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 12:30:40.808721 532344 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20318-471120/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0127 12:30:40.822917 532344 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
I0127 12:30:40.823297 532344 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0127 12:30:40.823329 532344 cni.go:84] Creating CNI manager for ""
I0127 12:30:40.823374 532344 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0127 12:30:40.823421 532344 start.go:340] cluster config:
{Name:no-preload-215237 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-215237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.127 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0127 12:30:40.823511 532344 iso.go:125] acquiring lock: {Name:mkc6ca3cbb5528e67f6dc9da0188f358e9fee620 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 12:30:40.825138 532344 out.go:177] * Starting "no-preload-215237" primary control-plane node in "no-preload-215237" cluster
I0127 12:30:40.826418 532344 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
I0127 12:30:40.826528 532344 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/no-preload-215237/config.json ...
I0127 12:30:40.826670 532344 cache.go:107] acquiring lock: {Name:mk55e556137b0c44eecbcafd8f1ad8d6d2235baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 12:30:40.826682 532344 cache.go:107] acquiring lock: {Name:mk821e1f96179d7c8829160b4eec213e789ee3c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 12:30:40.826691 532344 cache.go:107] acquiring lock: {Name:mk929031bf1a952c5b2751146f50732f4326ebe7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 12:30:40.826723 532344 start.go:360] acquireMachinesLock for no-preload-215237: {Name:mk818835aef0de701295cc2c98fea95e1be33202 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0127 12:30:40.826745 532344 cache.go:107] acquiring lock: {Name:mkf7c3fecb361dc165769bdeefaf93c09aa4c1a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 12:30:40.826753 532344 cache.go:107] acquiring lock: {Name:mka663f6d0ea2d905d4b82f301a92ab6cde3c40e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 12:30:40.826767 532344 start.go:364] duration metric: took 25.335µs to acquireMachinesLock for "no-preload-215237"
I0127 12:30:40.826711 532344 cache.go:107] acquiring lock: {Name:mk837708656e0fcd1bce12e43d0e6bbb5fd34cfe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 12:30:40.826775 532344 cache.go:115] /home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 exists
I0127 12:30:40.826783 532344 start.go:96] Skipping create...Using existing machine configuration
I0127 12:30:40.826790 532344 fix.go:54] fixHost starting:
I0127 12:30:40.826790 532344 cache.go:115] /home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
I0127 12:30:40.826791 532344 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.32.1" -> "/home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1" took 122.972µs
I0127 12:30:40.826802 532344 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.32.1 -> /home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 succeeded
I0127 12:30:40.826778 532344 cache.go:115] /home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 exists
I0127 12:30:40.826816 532344 cache.go:115] /home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 exists
I0127 12:30:40.826816 532344 cache.go:115] /home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
I0127 12:30:40.826817 532344 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.32.1" -> "/home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1" took 140.599µs
I0127 12:30:40.826830 532344 cache.go:115] /home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
I0127 12:30:40.826828 532344 cache.go:96] cache image "registry.k8s.io/etcd:3.5.16-0" -> "/home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0" took 127.926µs
I0127 12:30:40.826838 532344 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 88.859µs
I0127 12:30:40.826877 532344 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
I0127 12:30:40.826832 532344 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.32.1 -> /home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 succeeded
I0127 12:30:40.826841 532344 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.16-0 -> /home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 succeeded
I0127 12:30:40.826803 532344 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 142.867µs
I0127 12:30:40.826897 532344 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
I0127 12:30:40.826830 532344 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 86.77µs
I0127 12:30:40.826905 532344 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
I0127 12:30:40.826783 532344 cache.go:107] acquiring lock: {Name:mke910280a5e5f0cfff4ec3463b563cf11210087 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 12:30:40.826830 532344 cache.go:107] acquiring lock: {Name:mkd2a6bebb2f88e8eab599e070725a391f31a539 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 12:30:40.826938 532344 cache.go:115] /home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 exists
I0127 12:30:40.826950 532344 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.32.1" -> "/home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1" took 202.041µs
I0127 12:30:40.826959 532344 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.32.1 -> /home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 succeeded
I0127 12:30:40.826971 532344 cache.go:115] /home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 exists
I0127 12:30:40.826980 532344 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.32.1" -> "/home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1" took 201.579µs
I0127 12:30:40.826992 532344 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.32.1 -> /home/jenkins/minikube-integration/20318-471120/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 succeeded
I0127 12:30:40.827004 532344 cache.go:87] Successfully saved all images to host disk.
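The cache.go lines above resolve each image to a tar under .minikube/cache/images/<arch>/ with the ":tag" rewritten to "_tag" (e.g. kube-proxy_v1.32.1), and skip the save when it already exists. A minimal sketch of that path mapping and cache check; the helper name is hypothetical, not minikube's:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// cachedImagePath mirrors the layout seen in the log:
	// "registry.k8s.io/kube-proxy:v1.32.1" ->
	// <cacheDir>/images/amd64/registry.k8s.io/kube-proxy_v1.32.1
	func cachedImagePath(cacheDir, arch, image string) string {
		name := strings.ReplaceAll(image, ":", "_")
		return filepath.Join(cacheDir, "images", arch, name)
	}

	func main() {
		p := cachedImagePath("/home/jenkins/.minikube/cache", "amd64",
			"registry.k8s.io/kube-proxy:v1.32.1")
		if _, err := os.Stat(p); err == nil {
			fmt.Println("cache hit:", p) // the "exists ... succeeded" case above
		} else {
			fmt.Println("cache miss, would pull and save:", p)
		}
	}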
I0127 12:30:40.827136 532344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:30:40.827181 532344 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:30:40.840594 532344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45625
I0127 12:30:40.841066 532344 main.go:141] libmachine: () Calling .GetVersion
I0127 12:30:40.841617 532344 main.go:141] libmachine: Using API Version 1
I0127 12:30:40.841637 532344 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:30:40.841959 532344 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:30:40.842165 532344 main.go:141] libmachine: (no-preload-215237) Calling .DriverName
I0127 12:30:40.842301 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetState
I0127 12:30:40.843824 532344 fix.go:112] recreateIfNeeded on no-preload-215237: state=Stopped err=<nil>
I0127 12:30:40.843852 532344 main.go:141] libmachine: (no-preload-215237) Calling .DriverName
W0127 12:30:40.843991 532344 fix.go:138] unexpected machine state, will restart: <nil>
I0127 12:30:40.845738 532344 out.go:177] * Restarting existing kvm2 VM for "no-preload-215237" ...
I0127 12:30:40.846939 532344 main.go:141] libmachine: (no-preload-215237) Calling .Start
I0127 12:30:40.848043 532344 main.go:141] libmachine: (no-preload-215237) starting domain...
I0127 12:30:40.848079 532344 main.go:141] libmachine: (no-preload-215237) ensuring networks are active...
I0127 12:30:40.848690 532344 main.go:141] libmachine: (no-preload-215237) Ensuring network default is active
I0127 12:30:40.849048 532344 main.go:141] libmachine: (no-preload-215237) Ensuring network mk-no-preload-215237 is active
I0127 12:30:40.849478 532344 main.go:141] libmachine: (no-preload-215237) getting domain XML...
I0127 12:30:40.850299 532344 main.go:141] libmachine: (no-preload-215237) creating domain...
I0127 12:30:42.033031 532344 main.go:141] libmachine: (no-preload-215237) waiting for IP...
I0127 12:30:42.033824 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:30:42.034251 532344 main.go:141] libmachine: (no-preload-215237) DBG | unable to find current IP address of domain no-preload-215237 in network mk-no-preload-215237
I0127 12:30:42.034346 532344 main.go:141] libmachine: (no-preload-215237) DBG | I0127 12:30:42.034240 532380 retry.go:31] will retry after 216.227621ms: waiting for domain to come up
I0127 12:30:42.251883 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:30:42.252518 532344 main.go:141] libmachine: (no-preload-215237) DBG | unable to find current IP address of domain no-preload-215237 in network mk-no-preload-215237
I0127 12:30:42.252551 532344 main.go:141] libmachine: (no-preload-215237) DBG | I0127 12:30:42.252472 532380 retry.go:31] will retry after 259.03318ms: waiting for domain to come up
I0127 12:30:42.513108 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:30:42.513658 532344 main.go:141] libmachine: (no-preload-215237) DBG | unable to find current IP address of domain no-preload-215237 in network mk-no-preload-215237
I0127 12:30:42.513690 532344 main.go:141] libmachine: (no-preload-215237) DBG | I0127 12:30:42.513561 532380 retry.go:31] will retry after 328.428662ms: waiting for domain to come up
I0127 12:30:42.844239 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:30:42.844721 532344 main.go:141] libmachine: (no-preload-215237) DBG | unable to find current IP address of domain no-preload-215237 in network mk-no-preload-215237
I0127 12:30:42.844756 532344 main.go:141] libmachine: (no-preload-215237) DBG | I0127 12:30:42.844680 532380 retry.go:31] will retry after 527.092813ms: waiting for domain to come up
I0127 12:30:43.373357 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:30:43.373864 532344 main.go:141] libmachine: (no-preload-215237) DBG | unable to find current IP address of domain no-preload-215237 in network mk-no-preload-215237
I0127 12:30:43.373886 532344 main.go:141] libmachine: (no-preload-215237) DBG | I0127 12:30:43.373823 532380 retry.go:31] will retry after 704.763548ms: waiting for domain to come up
I0127 12:30:44.079794 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:30:44.080321 532344 main.go:141] libmachine: (no-preload-215237) DBG | unable to find current IP address of domain no-preload-215237 in network mk-no-preload-215237
I0127 12:30:44.080357 532344 main.go:141] libmachine: (no-preload-215237) DBG | I0127 12:30:44.080285 532380 retry.go:31] will retry after 929.711084ms: waiting for domain to come up
I0127 12:30:45.011401 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:30:45.011920 532344 main.go:141] libmachine: (no-preload-215237) DBG | unable to find current IP address of domain no-preload-215237 in network mk-no-preload-215237
I0127 12:30:45.011953 532344 main.go:141] libmachine: (no-preload-215237) DBG | I0127 12:30:45.011876 532380 retry.go:31] will retry after 1.164341882s: waiting for domain to come up
I0127 12:30:46.177513 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:30:46.178005 532344 main.go:141] libmachine: (no-preload-215237) DBG | unable to find current IP address of domain no-preload-215237 in network mk-no-preload-215237
I0127 12:30:46.178033 532344 main.go:141] libmachine: (no-preload-215237) DBG | I0127 12:30:46.177963 532380 retry.go:31] will retry after 1.423725356s: waiting for domain to come up
I0127 12:30:47.602746 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:30:47.603179 532344 main.go:141] libmachine: (no-preload-215237) DBG | unable to find current IP address of domain no-preload-215237 in network mk-no-preload-215237
I0127 12:30:47.603205 532344 main.go:141] libmachine: (no-preload-215237) DBG | I0127 12:30:47.603155 532380 retry.go:31] will retry after 1.393685643s: waiting for domain to come up
I0127 12:30:48.998707 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:30:48.999209 532344 main.go:141] libmachine: (no-preload-215237) DBG | unable to find current IP address of domain no-preload-215237 in network mk-no-preload-215237
I0127 12:30:48.999248 532344 main.go:141] libmachine: (no-preload-215237) DBG | I0127 12:30:48.999158 532380 retry.go:31] will retry after 1.514373112s: waiting for domain to come up
I0127 12:30:50.516002 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:30:50.516491 532344 main.go:141] libmachine: (no-preload-215237) DBG | unable to find current IP address of domain no-preload-215237 in network mk-no-preload-215237
I0127 12:30:50.516528 532344 main.go:141] libmachine: (no-preload-215237) DBG | I0127 12:30:50.516429 532380 retry.go:31] will retry after 2.407396715s: waiting for domain to come up
I0127 12:30:52.926548 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:30:52.927029 532344 main.go:141] libmachine: (no-preload-215237) DBG | unable to find current IP address of domain no-preload-215237 in network mk-no-preload-215237
I0127 12:30:52.927060 532344 main.go:141] libmachine: (no-preload-215237) DBG | I0127 12:30:52.926981 532380 retry.go:31] will retry after 2.617026411s: waiting for domain to come up
I0127 12:30:55.546865 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:30:55.547487 532344 main.go:141] libmachine: (no-preload-215237) DBG | unable to find current IP address of domain no-preload-215237 in network mk-no-preload-215237
I0127 12:30:55.547512 532344 main.go:141] libmachine: (no-preload-215237) DBG | I0127 12:30:55.547433 532380 retry.go:31] will retry after 3.886989093s: waiting for domain to come up
I0127 12:30:59.438919 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:30:59.439387 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has current primary IP address 192.168.72.127 and MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:30:59.439416 532344 main.go:141] libmachine: (no-preload-215237) found domain IP: 192.168.72.127
I0127 12:30:59.439429 532344 main.go:141] libmachine: (no-preload-215237) reserving static IP address...
I0127 12:30:59.439874 532344 main.go:141] libmachine: (no-preload-215237) DBG | found host DHCP lease matching {name: "no-preload-215237", mac: "52:54:00:f8:56:01", ip: "192.168.72.127"} in network mk-no-preload-215237: {Iface:virbr4 ExpiryTime:2025-01-27 13:30:51 +0000 UTC Type:0 Mac:52:54:00:f8:56:01 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:no-preload-215237 Clientid:01:52:54:00:f8:56:01}
I0127 12:30:59.439903 532344 main.go:141] libmachine: (no-preload-215237) DBG | skip adding static IP to network mk-no-preload-215237 - found existing host DHCP lease matching {name: "no-preload-215237", mac: "52:54:00:f8:56:01", ip: "192.168.72.127"}
I0127 12:30:59.439918 532344 main.go:141] libmachine: (no-preload-215237) reserved static IP address 192.168.72.127 for domain no-preload-215237
I0127 12:30:59.439933 532344 main.go:141] libmachine: (no-preload-215237) waiting for SSH...
I0127 12:30:59.439945 532344 main.go:141] libmachine: (no-preload-215237) DBG | Getting to WaitForSSH function...
I0127 12:30:59.441927 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:30:59.442276 532344 main.go:141] libmachine: (no-preload-215237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:01", ip: ""} in network mk-no-preload-215237: {Iface:virbr4 ExpiryTime:2025-01-27 13:30:51 +0000 UTC Type:0 Mac:52:54:00:f8:56:01 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:no-preload-215237 Clientid:01:52:54:00:f8:56:01}
I0127 12:30:59.442301 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined IP address 192.168.72.127 and MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:30:59.442422 532344 main.go:141] libmachine: (no-preload-215237) DBG | Using SSH client type: external
I0127 12:30:59.442438 532344 main.go:141] libmachine: (no-preload-215237) DBG | Using SSH private key: /home/jenkins/minikube-integration/20318-471120/.minikube/machines/no-preload-215237/id_rsa (-rw-------)
I0127 12:30:59.442510 532344 main.go:141] libmachine: (no-preload-215237) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.127 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20318-471120/.minikube/machines/no-preload-215237/id_rsa -p 22] /usr/bin/ssh <nil>}
I0127 12:30:59.442536 532344 main.go:141] libmachine: (no-preload-215237) DBG | About to run SSH command:
I0127 12:30:59.442551 532344 main.go:141] libmachine: (no-preload-215237) DBG | exit 0
I0127 12:30:59.567981 532344 main.go:141] libmachine: (no-preload-215237) DBG | SSH cmd err, output: <nil>:
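The "waiting for IP" block above retries with jittered delays that grow from ~216ms to ~3.9s before the domain reports an address. A minimal sketch of that retry shape in Go; the growth factor and jitter range are assumptions, not retry.go's exact constants:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff: try, wait a jittered and growing delay, try again
	// until the deadline passes -- the pattern the retry.go lines log.
	func retryWithBackoff(deadline time.Duration, try func() error) error {
		start := time.Now()
		delay := 200 * time.Millisecond // first observed delay was ~216ms
		for time.Since(start) < deadline {
			if err := try(); err == nil {
				return nil
			}
			jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %s: waiting for domain to come up\n", jittered)
			time.Sleep(jittered)
			delay = delay * 3 / 2 // assumed growth factor
		}
		return errors.New("timed out waiting for domain IP")
	}

	func main() {
		attempts := 0
		_ = retryWithBackoff(5*time.Second, func() error {
			attempts++
			if attempts < 4 {
				return errors.New("no IP yet")
			}
			return nil
		})
		fmt.Println("domain came up after", attempts, "attempts")
	}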
I0127 12:30:59.568339 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetConfigRaw
I0127 12:30:59.568989 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetIP
I0127 12:30:59.571592 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:30:59.571959 532344 main.go:141] libmachine: (no-preload-215237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:01", ip: ""} in network mk-no-preload-215237: {Iface:virbr4 ExpiryTime:2025-01-27 13:30:51 +0000 UTC Type:0 Mac:52:54:00:f8:56:01 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:no-preload-215237 Clientid:01:52:54:00:f8:56:01}
I0127 12:30:59.571989 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined IP address 192.168.72.127 and MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:30:59.572273 532344 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/no-preload-215237/config.json ...
I0127 12:30:59.572469 532344 machine.go:93] provisionDockerMachine start ...
I0127 12:30:59.572497 532344 main.go:141] libmachine: (no-preload-215237) Calling .DriverName
I0127 12:30:59.572706 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHHostname
I0127 12:30:59.574838 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:30:59.575239 532344 main.go:141] libmachine: (no-preload-215237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:01", ip: ""} in network mk-no-preload-215237: {Iface:virbr4 ExpiryTime:2025-01-27 13:30:51 +0000 UTC Type:0 Mac:52:54:00:f8:56:01 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:no-preload-215237 Clientid:01:52:54:00:f8:56:01}
I0127 12:30:59.575263 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined IP address 192.168.72.127 and MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:30:59.575397 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHPort
I0127 12:30:59.575571 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHKeyPath
I0127 12:30:59.575727 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHKeyPath
I0127 12:30:59.575896 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHUsername
I0127 12:30:59.576055 532344 main.go:141] libmachine: Using SSH client type: native
I0127 12:30:59.576315 532344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.72.127 22 <nil> <nil>}
I0127 12:30:59.576332 532344 main.go:141] libmachine: About to run SSH command:
hostname
I0127 12:30:59.684121 532344 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0127 12:30:59.684143 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetMachineName
I0127 12:30:59.684363 532344 buildroot.go:166] provisioning hostname "no-preload-215237"
I0127 12:30:59.684395 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetMachineName
I0127 12:30:59.684563 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHHostname
I0127 12:30:59.687017 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:30:59.687498 532344 main.go:141] libmachine: (no-preload-215237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:01", ip: ""} in network mk-no-preload-215237: {Iface:virbr4 ExpiryTime:2025-01-27 13:30:51 +0000 UTC Type:0 Mac:52:54:00:f8:56:01 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:no-preload-215237 Clientid:01:52:54:00:f8:56:01}
I0127 12:30:59.687519 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined IP address 192.168.72.127 and MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:30:59.687688 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHPort
I0127 12:30:59.687882 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHKeyPath
I0127 12:30:59.688033 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHKeyPath
I0127 12:30:59.688149 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHUsername
I0127 12:30:59.688400 532344 main.go:141] libmachine: Using SSH client type: native
I0127 12:30:59.688606 532344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.72.127 22 <nil> <nil>}
I0127 12:30:59.688620 532344 main.go:141] libmachine: About to run SSH command:
sudo hostname no-preload-215237 && echo "no-preload-215237" | sudo tee /etc/hostname
I0127 12:30:59.809126 532344 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-215237
I0127 12:30:59.809160 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHHostname
I0127 12:30:59.811730 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:30:59.812034 532344 main.go:141] libmachine: (no-preload-215237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:01", ip: ""} in network mk-no-preload-215237: {Iface:virbr4 ExpiryTime:2025-01-27 13:30:51 +0000 UTC Type:0 Mac:52:54:00:f8:56:01 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:no-preload-215237 Clientid:01:52:54:00:f8:56:01}
I0127 12:30:59.812065 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined IP address 192.168.72.127 and MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:30:59.812279 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHPort
I0127 12:30:59.812479 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHKeyPath
I0127 12:30:59.812666 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHKeyPath
I0127 12:30:59.812823 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHUsername
I0127 12:30:59.812975 532344 main.go:141] libmachine: Using SSH client type: native
I0127 12:30:59.813154 532344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.72.127 22 <nil> <nil>}
I0127 12:30:59.813177 532344 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sno-preload-215237' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-215237/g' /etc/hosts;
else
echo '127.0.1.1 no-preload-215237' | sudo tee -a /etc/hosts;
fi
fi
I0127 12:30:59.928174 532344 main.go:141] libmachine: SSH cmd err, output: <nil>:
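Each "About to run SSH command" / "SSH cmd err, output" pair above is a remote shell invocation authenticated with the machine's private key. A minimal sketch of the same round trip using golang.org/x/crypto/ssh, with the key path, user, and address taken from the log; InsecureIgnoreHostKey mirrors the StrictHostKeyChecking=no option shown in the external-SSH command line:

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/20318-471120/.minikube/machines/no-preload-215237/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		}
		client, err := ssh.Dial("tcp", "192.168.72.127:22", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()

		out, err := sess.CombinedOutput("hostname")
		fmt.Printf("SSH cmd err, output: %v: %s", err, out)
	}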
I0127 12:30:59.928216 532344 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20318-471120/.minikube CaCertPath:/home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20318-471120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20318-471120/.minikube}
I0127 12:30:59.928250 532344 buildroot.go:174] setting up certificates
I0127 12:30:59.928266 532344 provision.go:84] configureAuth start
I0127 12:30:59.928289 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetMachineName
I0127 12:30:59.928558 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetIP
I0127 12:30:59.931047 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:30:59.931432 532344 main.go:141] libmachine: (no-preload-215237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:01", ip: ""} in network mk-no-preload-215237: {Iface:virbr4 ExpiryTime:2025-01-27 13:30:51 +0000 UTC Type:0 Mac:52:54:00:f8:56:01 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:no-preload-215237 Clientid:01:52:54:00:f8:56:01}
I0127 12:30:59.931458 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined IP address 192.168.72.127 and MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:30:59.931628 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHHostname
I0127 12:30:59.933683 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:30:59.934054 532344 main.go:141] libmachine: (no-preload-215237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:01", ip: ""} in network mk-no-preload-215237: {Iface:virbr4 ExpiryTime:2025-01-27 13:30:51 +0000 UTC Type:0 Mac:52:54:00:f8:56:01 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:no-preload-215237 Clientid:01:52:54:00:f8:56:01}
I0127 12:30:59.934084 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined IP address 192.168.72.127 and MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:30:59.934225 532344 provision.go:143] copyHostCerts
I0127 12:30:59.934287 532344 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-471120/.minikube/key.pem, removing ...
I0127 12:30:59.934312 532344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-471120/.minikube/key.pem
I0127 12:30:59.934391 532344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20318-471120/.minikube/key.pem (1679 bytes)
I0127 12:30:59.934498 532344 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-471120/.minikube/ca.pem, removing ...
I0127 12:30:59.934509 532344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-471120/.minikube/ca.pem
I0127 12:30:59.934546 532344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20318-471120/.minikube/ca.pem (1082 bytes)
I0127 12:30:59.934622 532344 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-471120/.minikube/cert.pem, removing ...
I0127 12:30:59.934632 532344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-471120/.minikube/cert.pem
I0127 12:30:59.934665 532344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20318-471120/.minikube/cert.pem (1123 bytes)
I0127 12:30:59.934735 532344 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20318-471120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca-key.pem org=jenkins.no-preload-215237 san=[127.0.0.1 192.168.72.127 localhost minikube no-preload-215237]
I0127 12:31:00.052134 532344 provision.go:177] copyRemoteCerts
I0127 12:31:00.052197 532344 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0127 12:31:00.052224 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHHostname
I0127 12:31:00.054597 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:31:00.054994 532344 main.go:141] libmachine: (no-preload-215237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:01", ip: ""} in network mk-no-preload-215237: {Iface:virbr4 ExpiryTime:2025-01-27 13:30:51 +0000 UTC Type:0 Mac:52:54:00:f8:56:01 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:no-preload-215237 Clientid:01:52:54:00:f8:56:01}
I0127 12:31:00.055028 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined IP address 192.168.72.127 and MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:31:00.055188 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHPort
I0127 12:31:00.055385 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHKeyPath
I0127 12:31:00.055557 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHUsername
I0127 12:31:00.055685 532344 sshutil.go:53] new ssh client: &{IP:192.168.72.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/no-preload-215237/id_rsa Username:docker}
I0127 12:31:00.138123 532344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0127 12:31:00.159235 532344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I0127 12:31:00.179466 532344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0127 12:31:00.201071 532344 provision.go:87] duration metric: took 272.788555ms to configureAuth
I0127 12:31:00.201093 532344 buildroot.go:189] setting minikube options for container-runtime
I0127 12:31:00.201247 532344 config.go:182] Loaded profile config "no-preload-215237": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 12:31:00.201256 532344 machine.go:96] duration metric: took 628.773488ms to provisionDockerMachine
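configureAuth above generates a server certificate whose SAN list is the set logged at provision.go:117 (127.0.0.1, 192.168.72.127, localhost, minikube, no-preload-215237). A sketch of producing a cert with that SAN list via crypto/x509; self-signed here for brevity, whereas minikube signs with the CA key pair named in the log:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-215237"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
			DNSNames:     []string{"localhost", "minikube", "no-preload-215237"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.127")},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
			log.Fatal(err)
		}
	}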
I0127 12:31:00.201264 532344 start.go:293] postStartSetup for "no-preload-215237" (driver="kvm2")
I0127 12:31:00.201274 532344 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0127 12:31:00.201301 532344 main.go:141] libmachine: (no-preload-215237) Calling .DriverName
I0127 12:31:00.201610 532344 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0127 12:31:00.201640 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHHostname
I0127 12:31:00.204042 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:31:00.204384 532344 main.go:141] libmachine: (no-preload-215237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:01", ip: ""} in network mk-no-preload-215237: {Iface:virbr4 ExpiryTime:2025-01-27 13:30:51 +0000 UTC Type:0 Mac:52:54:00:f8:56:01 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:no-preload-215237 Clientid:01:52:54:00:f8:56:01}
I0127 12:31:00.204411 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined IP address 192.168.72.127 and MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:31:00.204567 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHPort
I0127 12:31:00.204782 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHKeyPath
I0127 12:31:00.204951 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHUsername
I0127 12:31:00.205111 532344 sshutil.go:53] new ssh client: &{IP:192.168.72.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/no-preload-215237/id_rsa Username:docker}
I0127 12:31:00.290264 532344 ssh_runner.go:195] Run: cat /etc/os-release
I0127 12:31:00.294177 532344 info.go:137] Remote host: Buildroot 2023.02.9
I0127 12:31:00.294205 532344 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-471120/.minikube/addons for local assets ...
I0127 12:31:00.294280 532344 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-471120/.minikube/files for local assets ...
I0127 12:31:00.294371 532344 filesync.go:149] local asset: /home/jenkins/minikube-integration/20318-471120/.minikube/files/etc/ssl/certs/4783872.pem -> 4783872.pem in /etc/ssl/certs
I0127 12:31:00.294486 532344 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0127 12:31:00.303136 532344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/files/etc/ssl/certs/4783872.pem --> /etc/ssl/certs/4783872.pem (1708 bytes)
I0127 12:31:00.323875 532344 start.go:296] duration metric: took 122.599026ms for postStartSetup
I0127 12:31:00.323915 532344 fix.go:56] duration metric: took 19.497125621s for fixHost
I0127 12:31:00.323936 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHHostname
I0127 12:31:00.326361 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:31:00.326682 532344 main.go:141] libmachine: (no-preload-215237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:01", ip: ""} in network mk-no-preload-215237: {Iface:virbr4 ExpiryTime:2025-01-27 13:30:51 +0000 UTC Type:0 Mac:52:54:00:f8:56:01 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:no-preload-215237 Clientid:01:52:54:00:f8:56:01}
I0127 12:31:00.326707 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined IP address 192.168.72.127 and MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:31:00.326913 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHPort
I0127 12:31:00.327092 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHKeyPath
I0127 12:31:00.327242 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHKeyPath
I0127 12:31:00.327360 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHUsername
I0127 12:31:00.327496 532344 main.go:141] libmachine: Using SSH client type: native
I0127 12:31:00.327673 532344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.72.127 22 <nil> <nil>}
I0127 12:31:00.327684 532344 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0127 12:31:00.436970 532344 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737981060.412662770
I0127 12:31:00.436997 532344 fix.go:216] guest clock: 1737981060.412662770
I0127 12:31:00.437004 532344 fix.go:229] Guest: 2025-01-27 12:31:00.41266277 +0000 UTC Remote: 2025-01-27 12:31:00.323919122 +0000 UTC m=+19.633267258 (delta=88.743648ms)
I0127 12:31:00.437024 532344 fix.go:200] guest clock delta is within tolerance: 88.743648ms
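fix.go reads the guest clock with `date +%s.%N` over SSH and compares it against the host-side timestamp, accepting the 88.7ms delta above. A sketch of that comparison using the two values from the log; the tolerance constant is an assumption, not fix.go's actual value:

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	func main() {
		const guestOut = "1737981060.412662770" // SSH output above
		secs, err := strconv.ParseFloat(guestOut, 64)
		if err != nil {
			panic(err)
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		// Host-side "Remote" timestamp from the log line above.
		remote := time.Date(2025, 1, 27, 12, 31, 0, 323919122, time.UTC)

		delta := guest.Sub(remote)
		const tolerance = time.Second // assumed threshold
		if delta < tolerance && delta > -tolerance {
			fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
		} else {
			fmt.Printf("would resync guest clock, delta %v\n", delta)
		}
	}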
I0127 12:31:00.437028 532344 start.go:83] releasing machines lock for "no-preload-215237", held for 19.610253908s
I0127 12:31:00.437048 532344 main.go:141] libmachine: (no-preload-215237) Calling .DriverName
I0127 12:31:00.437336 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetIP
I0127 12:31:00.440013 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:31:00.440380 532344 main.go:141] libmachine: (no-preload-215237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:01", ip: ""} in network mk-no-preload-215237: {Iface:virbr4 ExpiryTime:2025-01-27 13:30:51 +0000 UTC Type:0 Mac:52:54:00:f8:56:01 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:no-preload-215237 Clientid:01:52:54:00:f8:56:01}
I0127 12:31:00.440416 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined IP address 192.168.72.127 and MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:31:00.440580 532344 main.go:141] libmachine: (no-preload-215237) Calling .DriverName
I0127 12:31:00.441102 532344 main.go:141] libmachine: (no-preload-215237) Calling .DriverName
I0127 12:31:00.441284 532344 main.go:141] libmachine: (no-preload-215237) Calling .DriverName
I0127 12:31:00.441380 532344 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0127 12:31:00.441431 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHHostname
I0127 12:31:00.441489 532344 ssh_runner.go:195] Run: cat /version.json
I0127 12:31:00.441522 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHHostname
I0127 12:31:00.443822 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:31:00.443874 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:31:00.444218 532344 main.go:141] libmachine: (no-preload-215237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:01", ip: ""} in network mk-no-preload-215237: {Iface:virbr4 ExpiryTime:2025-01-27 13:30:51 +0000 UTC Type:0 Mac:52:54:00:f8:56:01 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:no-preload-215237 Clientid:01:52:54:00:f8:56:01}
I0127 12:31:00.444251 532344 main.go:141] libmachine: (no-preload-215237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:01", ip: ""} in network mk-no-preload-215237: {Iface:virbr4 ExpiryTime:2025-01-27 13:30:51 +0000 UTC Type:0 Mac:52:54:00:f8:56:01 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:no-preload-215237 Clientid:01:52:54:00:f8:56:01}
I0127 12:31:00.444272 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined IP address 192.168.72.127 and MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:31:00.444340 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined IP address 192.168.72.127 and MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:31:00.444466 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHPort
I0127 12:31:00.444612 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHPort
I0127 12:31:00.444687 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHKeyPath
I0127 12:31:00.444783 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHKeyPath
I0127 12:31:00.444839 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHUsername
I0127 12:31:00.444925 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHUsername
I0127 12:31:00.444987 532344 sshutil.go:53] new ssh client: &{IP:192.168.72.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/no-preload-215237/id_rsa Username:docker}
I0127 12:31:00.445082 532344 sshutil.go:53] new ssh client: &{IP:192.168.72.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/no-preload-215237/id_rsa Username:docker}
I0127 12:31:00.534498 532344 ssh_runner.go:195] Run: systemctl --version
I0127 12:31:00.564683 532344 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0127 12:31:00.569691 532344 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0127 12:31:00.569752 532344 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0127 12:31:00.583888 532344 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0127 12:31:00.583909 532344 start.go:495] detecting cgroup driver to use...
I0127 12:31:00.583974 532344 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0127 12:31:00.613953 532344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0127 12:31:00.625976 532344 docker.go:217] disabling cri-docker service (if available) ...
I0127 12:31:00.626021 532344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0127 12:31:00.638192 532344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0127 12:31:00.650175 532344 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0127 12:31:00.764972 532344 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0127 12:31:00.889875 532344 docker.go:233] disabling docker service ...
I0127 12:31:00.889955 532344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0127 12:31:00.903369 532344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0127 12:31:00.914933 532344 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0127 12:31:01.045889 532344 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0127 12:31:01.175748 532344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0127 12:31:01.187756 532344 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0127 12:31:01.205407 532344 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0127 12:31:01.214753 532344 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0127 12:31:01.223968 532344 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0127 12:31:01.224018 532344 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0127 12:31:01.233281 532344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0127 12:31:01.242430 532344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0127 12:31:01.251772 532344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0127 12:31:01.260995 532344 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0127 12:31:01.270440 532344 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0127 12:31:01.279739 532344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0127 12:31:01.288816 532344 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0127 12:31:01.298104 532344 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0127 12:31:01.306211 532344 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0127 12:31:01.306255 532344 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0127 12:31:01.318407 532344 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
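[editor's note] The sysctl probe at crio.go:166 above fails with status 255 because the br_netfilter module is not yet loaded, so minikube loads it and enables IPv4 forwarding. A sketch of that recovery path, assuming it runs as root on the guest:

// Hypothetical sketch of the netfilter preparation above: probe the
// bridge-nf-call-iptables sysctl, load br_netfilter if it is missing,
// then enable IPv4 forwarding, mirroring the modprobe and echo steps.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(key); err != nil {
		// Module not loaded yet; mirror "sudo modprobe br_netfilter".
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "modprobe br_netfilter: %v: %s\n", err, out)
			os.Exit(1)
		}
	}
	// Mirror `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("bridge netfilter and IPv4 forwarding ready")
}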
I0127 12:31:01.326978 532344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 12:31:01.446085 532344 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0127 12:31:01.472453 532344 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0127 12:31:01.472530 532344 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0127 12:31:01.477101 532344 retry.go:31] will retry after 1.31059768s: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
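[editor's note] The retry.go:31 line above shows minikube backing off and re-polling until the containerd socket appears, within the 60s budget announced at start.go:542. A minimal stand-in for that wait loop (helper name is illustrative):

// Minimal stand-in for the retry above: poll for the containerd socket
// with exponential backoff until it exists or the 60s budget is spent.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	backoff := 500 * time.Millisecond
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(backoff)
		if backoff < 5*time.Second {
			backoff *= 2 // grow the wait between probes
		}
	}
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("containerd socket is up")
}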
I0127 12:31:02.788604 532344 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0127 12:31:02.793855 532344 start.go:563] Will wait 60s for crictl version
I0127 12:31:02.793909 532344 ssh_runner.go:195] Run: which crictl
I0127 12:31:02.797452 532344 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0127 12:31:02.841844 532344 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.7.23
RuntimeApiVersion: v1
I0127 12:31:02.841918 532344 ssh_runner.go:195] Run: containerd --version
I0127 12:31:02.868423 532344 ssh_runner.go:195] Run: containerd --version
I0127 12:31:02.892306 532344 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
I0127 12:31:02.893458 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetIP
I0127 12:31:02.896603 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:31:02.897044 532344 main.go:141] libmachine: (no-preload-215237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:01", ip: ""} in network mk-no-preload-215237: {Iface:virbr4 ExpiryTime:2025-01-27 13:30:51 +0000 UTC Type:0 Mac:52:54:00:f8:56:01 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:no-preload-215237 Clientid:01:52:54:00:f8:56:01}
I0127 12:31:02.897077 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined IP address 192.168.72.127 and MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:31:02.897311 532344 ssh_runner.go:195] Run: grep 192.168.72.1 host.minikube.internal$ /etc/hosts
I0127 12:31:02.901184 532344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0127 12:31:02.913317 532344 kubeadm.go:883] updating cluster {Name:no-preload-215237 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-215237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.127 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0127 12:31:02.913471 532344 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
I0127 12:31:02.913539 532344 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 12:31:02.943808 532344 containerd.go:627] all images are preloaded for containerd runtime.
I0127 12:31:02.943828 532344 cache_images.go:84] Images are preloaded, skipping loading
I0127 12:31:02.943837 532344 kubeadm.go:934] updating node { 192.168.72.127 8443 v1.32.1 containerd true true} ...
I0127 12:31:02.943928 532344 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-215237 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.127
[Install]
config:
{KubernetesVersion:v1.32.1 ClusterName:no-preload-215237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0127 12:31:02.943982 532344 ssh_runner.go:195] Run: sudo crictl info
I0127 12:31:02.974803 532344 cni.go:84] Creating CNI manager for ""
I0127 12:31:02.974824 532344 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0127 12:31:02.974834 532344 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0127 12:31:02.974857 532344 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.127 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-215237 NodeName:no-preload-215237 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.127"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.127 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0127 12:31:02.974956 532344 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.72.127
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "no-preload-215237"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.72.127"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.72.127"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      - name: "proxy-refresh-interval"
        value: "70000"
kubernetesVersion: v1.32.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
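[editor's note] The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered from per-cluster values before being written to /var/tmp/minikube/kubeadm.yaml.new. A sketch of how such a fragment could be produced with Go's text/template; the template literal and field names here are assumptions for illustration, not minikube's real template:

// Illustrative sketch: render a fragment of a kubeadm config like the one
// above from per-cluster values using text/template.
package main

import (
	"os"
	"text/template"
)

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	tmpl := template.Must(template.New("init").Parse(initCfg))
	err := tmpl.Execute(os.Stdout, map[string]any{
		"AdvertiseAddress": "192.168.72.127",
		"APIServerPort":    8443,
		"CRISocket":        "unix:///run/containerd/containerd.sock",
		"NodeName":         "no-preload-215237",
	})
	if err != nil {
		panic(err)
	}
}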
I0127 12:31:02.975009 532344 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
I0127 12:31:02.984012 532344 binaries.go:44] Found k8s binaries, skipping transfer
I0127 12:31:02.984070 532344 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0127 12:31:02.992339 532344 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
I0127 12:31:03.007404 532344 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0127 12:31:03.022118 532344 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2313 bytes)
I0127 12:31:03.036811 532344 ssh_runner.go:195] Run: grep 192.168.72.127 control-plane.minikube.internal$ /etc/hosts
I0127 12:31:03.040003 532344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.127 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0127 12:31:03.051232 532344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 12:31:03.172247 532344 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0127 12:31:03.192551 532344 certs.go:68] Setting up /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/no-preload-215237 for IP: 192.168.72.127
I0127 12:31:03.192572 532344 certs.go:194] generating shared ca certs ...
I0127 12:31:03.192588 532344 certs.go:226] acquiring lock for ca certs: {Name:mk02d117412837bd489768267e2b174e6c3ff6b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 12:31:03.192793 532344 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20318-471120/.minikube/ca.key
I0127 12:31:03.192854 532344 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20318-471120/.minikube/proxy-client-ca.key
I0127 12:31:03.192868 532344 certs.go:256] generating profile certs ...
I0127 12:31:03.192984 532344 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/no-preload-215237/client.key
I0127 12:31:03.193064 532344 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/no-preload-215237/apiserver.key.8184fc12
I0127 12:31:03.193114 532344 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/no-preload-215237/proxy-client.key
I0127 12:31:03.193270 532344 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/478387.pem (1338 bytes)
W0127 12:31:03.193309 532344 certs.go:480] ignoring /home/jenkins/minikube-integration/20318-471120/.minikube/certs/478387_empty.pem, impossibly tiny 0 bytes
I0127 12:31:03.193323 532344 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca-key.pem (1679 bytes)
I0127 12:31:03.193356 532344 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca.pem (1082 bytes)
I0127 12:31:03.193385 532344 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/cert.pem (1123 bytes)
I0127 12:31:03.193417 532344 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/key.pem (1679 bytes)
I0127 12:31:03.193467 532344 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/files/etc/ssl/certs/4783872.pem (1708 bytes)
I0127 12:31:03.194073 532344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0127 12:31:03.227604 532344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0127 12:31:03.254585 532344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0127 12:31:03.283266 532344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0127 12:31:03.319723 532344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/no-preload-215237/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I0127 12:31:03.363597 532344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/no-preload-215237/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0127 12:31:03.396059 532344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/no-preload-215237/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0127 12:31:03.418199 532344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/no-preload-215237/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0127 12:31:03.442707 532344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/certs/478387.pem --> /usr/share/ca-certificates/478387.pem (1338 bytes)
I0127 12:31:03.464702 532344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/files/etc/ssl/certs/4783872.pem --> /usr/share/ca-certificates/4783872.pem (1708 bytes)
I0127 12:31:03.486822 532344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0127 12:31:03.508475 532344 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0127 12:31:03.523647 532344 ssh_runner.go:195] Run: openssl version
I0127 12:31:03.528893 532344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0127 12:31:03.538561 532344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0127 12:31:03.542628 532344 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 11:21 /usr/share/ca-certificates/minikubeCA.pem
I0127 12:31:03.542669 532344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0127 12:31:03.547997 532344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0127 12:31:03.557978 532344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/478387.pem && ln -fs /usr/share/ca-certificates/478387.pem /etc/ssl/certs/478387.pem"
I0127 12:31:03.573483 532344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/478387.pem
I0127 12:31:03.579430 532344 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 11:28 /usr/share/ca-certificates/478387.pem
I0127 12:31:03.579469 532344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/478387.pem
I0127 12:31:03.588347 532344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/478387.pem /etc/ssl/certs/51391683.0"
I0127 12:31:03.600641 532344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4783872.pem && ln -fs /usr/share/ca-certificates/4783872.pem /etc/ssl/certs/4783872.pem"
I0127 12:31:03.611232 532344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4783872.pem
I0127 12:31:03.615436 532344 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 11:28 /usr/share/ca-certificates/4783872.pem
I0127 12:31:03.615490 532344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4783872.pem
I0127 12:31:03.621133 532344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4783872.pem /etc/ssl/certs/3ec20f2e.0"
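[editor's note] The repeated "openssl x509 -hash" plus "ln -fs" pairs above wire each CA into the system trust store: OpenSSL's subject hash names the /etc/ssl/certs/<hash>.0 symlink. A compact Go sketch of one such installation, assuming openssl on PATH and write access to /etc/ssl/certs:

// Sketch of the trust-store wiring above: compute the OpenSSL subject hash
// of a CA certificate and point /etc/ssl/certs/<hash>.0 at it.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hash %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	os.Remove(link) // replace any stale link, like ln -fs
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}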
I0127 12:31:03.631880 532344 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0127 12:31:03.636131 532344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0127 12:31:03.641537 532344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0127 12:31:03.646536 532344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0127 12:31:03.651569 532344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0127 12:31:03.656580 532344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0127 12:31:03.661815 532344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
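[editor's note] The six "-checkend 86400" probes above ask whether each certificate expires within 24 hours. A pure-Go equivalent of one probe using crypto/x509, so the check is visible without shelling out to openssl:

// Pure-Go equivalent of "openssl x509 -checkend 86400": parse a PEM
// certificate and report whether it expires within the given window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(pemPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}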
I0127 12:31:03.666949 532344 kubeadm.go:392] StartCluster: {Name:no-preload-215237 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-215237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.127 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0127 12:31:03.667067 532344 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0127 12:31:03.667112 532344 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0127 12:31:03.708632 532344 cri.go:89] found id: "505fbc8803f52aaa2369926620791e9fb0143e36efc8bec95e264ca00ba0f4a3"
I0127 12:31:03.708661 532344 cri.go:89] found id: "67c0fdaeaf8f805761aef519e5ce8d7a18b84954fe7f4904252cc174d46814b9"
I0127 12:31:03.708673 532344 cri.go:89] found id: "869180061925aafeaed8b69400b166337c3f8922002e5df9120dd5175199cf18"
I0127 12:31:03.708679 532344 cri.go:89] found id: "f91fd7915e8528354c9e64a05d2b5a648e7580257c383e77d00fec62ed750130"
I0127 12:31:03.708683 532344 cri.go:89] found id: "3364d982e4cb9b77ea1ca4ee8d2c5f727fe3acca7ecc7819307ceb2267df4383"
I0127 12:31:03.708688 532344 cri.go:89] found id: "f15b855ef6d97b29741de5c05f40d90165b778820a07496bedddcbfee47d05b0"
I0127 12:31:03.708692 532344 cri.go:89] found id: "ace9d66a69af8df3d795c60a20b47c0f64075c460dcdc20966f2f52e635484d7"
I0127 12:31:03.708696 532344 cri.go:89] found id: ""
I0127 12:31:03.708768 532344 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
W0127 12:31:03.723216 532344 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-01-27T12:31:03Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I0127 12:31:03.723286 532344 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0127 12:31:03.732749 532344 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0127 12:31:03.732773 532344 kubeadm.go:593] restartPrimaryControlPlane start ...
I0127 12:31:03.732834 532344 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0127 12:31:03.742030 532344 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0127 12:31:03.742751 532344 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-215237" does not appear in /home/jenkins/minikube-integration/20318-471120/kubeconfig
I0127 12:31:03.743297 532344 kubeconfig.go:62] /home/jenkins/minikube-integration/20318-471120/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-215237" cluster setting kubeconfig missing "no-preload-215237" context setting]
I0127 12:31:03.743962 532344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-471120/kubeconfig: {Name:mk452cc8a4801513f9fb799655fd8ea78318fe87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 12:31:03.745759 532344 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0127 12:31:03.754320 532344 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.127
I0127 12:31:03.754348 532344 kubeadm.go:1160] stopping kube-system containers ...
I0127 12:31:03.754360 532344 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
I0127 12:31:03.754410 532344 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0127 12:31:03.796303 532344 cri.go:89] found id: "505fbc8803f52aaa2369926620791e9fb0143e36efc8bec95e264ca00ba0f4a3"
I0127 12:31:03.796334 532344 cri.go:89] found id: "67c0fdaeaf8f805761aef519e5ce8d7a18b84954fe7f4904252cc174d46814b9"
I0127 12:31:03.796340 532344 cri.go:89] found id: "869180061925aafeaed8b69400b166337c3f8922002e5df9120dd5175199cf18"
I0127 12:31:03.796345 532344 cri.go:89] found id: "f91fd7915e8528354c9e64a05d2b5a648e7580257c383e77d00fec62ed750130"
I0127 12:31:03.796349 532344 cri.go:89] found id: "3364d982e4cb9b77ea1ca4ee8d2c5f727fe3acca7ecc7819307ceb2267df4383"
I0127 12:31:03.796357 532344 cri.go:89] found id: "f15b855ef6d97b29741de5c05f40d90165b778820a07496bedddcbfee47d05b0"
I0127 12:31:03.796361 532344 cri.go:89] found id: "ace9d66a69af8df3d795c60a20b47c0f64075c460dcdc20966f2f52e635484d7"
I0127 12:31:03.796365 532344 cri.go:89] found id: ""
I0127 12:31:03.796373 532344 cri.go:252] Stopping containers: [505fbc8803f52aaa2369926620791e9fb0143e36efc8bec95e264ca00ba0f4a3 67c0fdaeaf8f805761aef519e5ce8d7a18b84954fe7f4904252cc174d46814b9 869180061925aafeaed8b69400b166337c3f8922002e5df9120dd5175199cf18 f91fd7915e8528354c9e64a05d2b5a648e7580257c383e77d00fec62ed750130 3364d982e4cb9b77ea1ca4ee8d2c5f727fe3acca7ecc7819307ceb2267df4383 f15b855ef6d97b29741de5c05f40d90165b778820a07496bedddcbfee47d05b0 ace9d66a69af8df3d795c60a20b47c0f64075c460dcdc20966f2f52e635484d7]
I0127 12:31:03.796432 532344 ssh_runner.go:195] Run: which crictl
I0127 12:31:03.800254 532344 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 505fbc8803f52aaa2369926620791e9fb0143e36efc8bec95e264ca00ba0f4a3 67c0fdaeaf8f805761aef519e5ce8d7a18b84954fe7f4904252cc174d46814b9 869180061925aafeaed8b69400b166337c3f8922002e5df9120dd5175199cf18 f91fd7915e8528354c9e64a05d2b5a648e7580257c383e77d00fec62ed750130 3364d982e4cb9b77ea1ca4ee8d2c5f727fe3acca7ecc7819307ceb2267df4383 f15b855ef6d97b29741de5c05f40d90165b778820a07496bedddcbfee47d05b0 ace9d66a69af8df3d795c60a20b47c0f64075c460dcdc20966f2f52e635484d7
I0127 12:31:03.832801 532344 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0127 12:31:03.848490 532344 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0127 12:31:03.858673 532344 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0127 12:31:03.858693 532344 kubeadm.go:157] found existing configuration files:
I0127 12:31:03.858738 532344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0127 12:31:03.867322 532344 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0127 12:31:03.867371 532344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0127 12:31:03.875833 532344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0127 12:31:03.884170 532344 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0127 12:31:03.884209 532344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0127 12:31:03.892639 532344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0127 12:31:03.900809 532344 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0127 12:31:03.900859 532344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0127 12:31:03.909231 532344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0127 12:31:03.917997 532344 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0127 12:31:03.918046 532344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0127 12:31:03.927395 532344 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0127 12:31:03.937153 532344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0127 12:31:04.054712 532344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0127 12:31:04.780572 532344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0127 12:31:04.989545 532344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0127 12:31:05.068231 532344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0127 12:31:05.167638 532344 api_server.go:52] waiting for apiserver process to appear ...
I0127 12:31:05.167744 532344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 12:31:05.667821 532344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 12:31:06.168324 532344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 12:31:06.196097 532344 api_server.go:72] duration metric: took 1.028459805s to wait for apiserver process to appear ...
I0127 12:31:06.196132 532344 api_server.go:88] waiting for apiserver healthz status ...
I0127 12:31:06.196166 532344 api_server.go:253] Checking apiserver healthz at https://192.168.72.127:8443/healthz ...
I0127 12:31:06.196920 532344 api_server.go:269] stopped: https://192.168.72.127:8443/healthz: Get "https://192.168.72.127:8443/healthz": dial tcp 192.168.72.127:8443: connect: connection refused
I0127 12:31:06.696590 532344 api_server.go:253] Checking apiserver healthz at https://192.168.72.127:8443/healthz ...
I0127 12:31:08.684891 532344 api_server.go:279] https://192.168.72.127:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0127 12:31:08.684939 532344 api_server.go:103] status: https://192.168.72.127:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0127 12:31:08.684960 532344 api_server.go:253] Checking apiserver healthz at https://192.168.72.127:8443/healthz ...
I0127 12:31:08.723267 532344 api_server.go:279] https://192.168.72.127:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0127 12:31:08.723300 532344 api_server.go:103] status: https://192.168.72.127:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0127 12:31:08.723318 532344 api_server.go:253] Checking apiserver healthz at https://192.168.72.127:8443/healthz ...
I0127 12:31:08.733845 532344 api_server.go:279] https://192.168.72.127:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0127 12:31:08.733876 532344 api_server.go:103] status: https://192.168.72.127:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0127 12:31:09.196471 532344 api_server.go:253] Checking apiserver healthz at https://192.168.72.127:8443/healthz ...
I0127 12:31:09.201015 532344 api_server.go:279] https://192.168.72.127:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0127 12:31:09.201038 532344 api_server.go:103] status: https://192.168.72.127:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0127 12:31:09.696253 532344 api_server.go:253] Checking apiserver healthz at https://192.168.72.127:8443/healthz ...
I0127 12:31:09.701316 532344 api_server.go:279] https://192.168.72.127:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0127 12:31:09.701345 532344 api_server.go:103] status: https://192.168.72.127:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0127 12:31:10.197092 532344 api_server.go:253] Checking apiserver healthz at https://192.168.72.127:8443/healthz ...
I0127 12:31:10.205140 532344 api_server.go:279] https://192.168.72.127:8443/healthz returned 200:
ok
I0127 12:31:10.213238 532344 api_server.go:141] control plane version: v1.32.1
I0127 12:31:10.213264 532344 api_server.go:131] duration metric: took 4.017123672s to wait for apiserver health ...
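[editor's note] The healthz sequence above tolerates connection refused, 403 (anonymous access blocked before RBAC bootstraps) and 500 (post-start hooks still pending) until the endpoint returns 200 "ok". A self-contained sketch of such a wait loop; TLS verification is skipped here purely to keep the sketch short, which minikube itself does not need to do:

// Sketch of the /healthz wait above: poll the apiserver endpoint, treating
// transport errors, 403s and 500s as "not ready yet", and stop at HTTP 200.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			status := resp.StatusCode
			resp.Body.Close()
			if status == http.StatusOK {
				return nil // endpoint answered "ok"
			}
			// 403 and 500 mean the control plane is still bootstrapping.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s never reported healthy", url)
}

func main() {
	if err := waitHealthy("https://192.168.72.127:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}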
I0127 12:31:10.213274 532344 cni.go:84] Creating CNI manager for ""
I0127 12:31:10.213280 532344 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0127 12:31:10.214831 532344 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0127 12:31:10.216111 532344 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0127 12:31:10.228338 532344 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0127 12:31:10.257329 532344 system_pods.go:43] waiting for kube-system pods to appear ...
I0127 12:31:10.281510 532344 system_pods.go:59] 8 kube-system pods found
I0127 12:31:10.281564 532344 system_pods.go:61] "coredns-668d6bf9bc-zh42j" [dcebb6c7-6360-408e-b1bf-0fa75706d01b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0127 12:31:10.281579 532344 system_pods.go:61] "etcd-no-preload-215237" [351bdcb1-e57f-452f-ac15-c919dbd85236] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0127 12:31:10.281597 532344 system_pods.go:61] "kube-apiserver-no-preload-215237" [31345d0f-59eb-4d21-b652-aa42121f6172] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0127 12:31:10.281610 532344 system_pods.go:61] "kube-controller-manager-no-preload-215237" [afe7df6f-3e38-43b9-92b0-fa0cc894da1e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0127 12:31:10.281620 532344 system_pods.go:61] "kube-proxy-4bwrn" [959b8095-1cf8-4883-97fc-8cee826fe012] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0127 12:31:10.281631 532344 system_pods.go:61] "kube-scheduler-no-preload-215237" [43bb4154-1617-43a3-b721-9a7eae31bc1a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0127 12:31:10.281645 532344 system_pods.go:61] "metrics-server-f79f97bbb-57422" [a3b4a3bd-65a5-4f98-9143-30f6bae7c691] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0127 12:31:10.281657 532344 system_pods.go:61] "storage-provisioner" [95a9ba7c-5fe2-4436-95a5-3d7cec947a22] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0127 12:31:10.281665 532344 system_pods.go:74] duration metric: took 24.311549ms to wait for pod list to return data ...
I0127 12:31:10.281680 532344 node_conditions.go:102] verifying NodePressure condition ...
I0127 12:31:10.284847 532344 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0127 12:31:10.284874 532344 node_conditions.go:123] node cpu capacity is 2
I0127 12:31:10.284889 532344 node_conditions.go:105] duration metric: took 3.200244ms to run NodePressure ...
I0127 12:31:10.284912 532344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0127 12:31:10.548277 532344 kubeadm.go:724] waiting for restarted kubelet to initialise ...
I0127 12:31:10.553575 532344 kubeadm.go:739] kubelet initialised
I0127 12:31:10.553596 532344 kubeadm.go:740] duration metric: took 5.291701ms waiting for restarted kubelet to initialise ...
I0127 12:31:10.553606 532344 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
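[editor's note] The pod_ready wait that follows keys off each pod's Ready condition, the same signal behind the "Ready":"True"/"False" lines below. A sketch of that check with client-go (k8s.io/client-go is an external dependency; the kubeconfig path is the one this run uses):

// Sketch of the readiness probe behind pod_ready: fetch a pod and inspect
// its PodReady condition.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20318-471120/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-668d6bf9bc-zh42j", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("Ready:", podReady(pod))
}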
I0127 12:31:10.561135 532344 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-zh42j" in "kube-system" namespace to be "Ready" ...
I0127 12:31:12.578507 532344 pod_ready.go:103] pod "coredns-668d6bf9bc-zh42j" in "kube-system" namespace has status "Ready":"False"
I0127 12:31:15.068380 532344 pod_ready.go:103] pod "coredns-668d6bf9bc-zh42j" in "kube-system" namespace has status "Ready":"False"
I0127 12:31:17.568005 532344 pod_ready.go:103] pod "coredns-668d6bf9bc-zh42j" in "kube-system" namespace has status "Ready":"False"
I0127 12:31:20.073351 532344 pod_ready.go:103] pod "coredns-668d6bf9bc-zh42j" in "kube-system" namespace has status "Ready":"False"
I0127 12:31:22.568605 532344 pod_ready.go:103] pod "coredns-668d6bf9bc-zh42j" in "kube-system" namespace has status "Ready":"False"
I0127 12:31:23.068873 532344 pod_ready.go:93] pod "coredns-668d6bf9bc-zh42j" in "kube-system" namespace has status "Ready":"True"
I0127 12:31:23.068898 532344 pod_ready.go:82] duration metric: took 12.507743159s for pod "coredns-668d6bf9bc-zh42j" in "kube-system" namespace to be "Ready" ...
I0127 12:31:23.068907 532344 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-215237" in "kube-system" namespace to be "Ready" ...
I0127 12:31:23.073880 532344 pod_ready.go:93] pod "etcd-no-preload-215237" in "kube-system" namespace has status "Ready":"True"
I0127 12:31:23.073904 532344 pod_ready.go:82] duration metric: took 4.987182ms for pod "etcd-no-preload-215237" in "kube-system" namespace to be "Ready" ...
I0127 12:31:23.073916 532344 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-215237" in "kube-system" namespace to be "Ready" ...
I0127 12:31:23.078751 532344 pod_ready.go:93] pod "kube-apiserver-no-preload-215237" in "kube-system" namespace has status "Ready":"True"
I0127 12:31:23.078772 532344 pod_ready.go:82] duration metric: took 4.848497ms for pod "kube-apiserver-no-preload-215237" in "kube-system" namespace to be "Ready" ...
I0127 12:31:23.078782 532344 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-215237" in "kube-system" namespace to be "Ready" ...
I0127 12:31:23.083332 532344 pod_ready.go:93] pod "kube-controller-manager-no-preload-215237" in "kube-system" namespace has status "Ready":"True"
I0127 12:31:23.083355 532344 pod_ready.go:82] duration metric: took 4.564246ms for pod "kube-controller-manager-no-preload-215237" in "kube-system" namespace to be "Ready" ...
I0127 12:31:23.083366 532344 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4bwrn" in "kube-system" namespace to be "Ready" ...
I0127 12:31:23.087407 532344 pod_ready.go:93] pod "kube-proxy-4bwrn" in "kube-system" namespace has status "Ready":"True"
I0127 12:31:23.087425 532344 pod_ready.go:82] duration metric: took 4.051963ms for pod "kube-proxy-4bwrn" in "kube-system" namespace to be "Ready" ...
I0127 12:31:23.087435 532344 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-215237" in "kube-system" namespace to be "Ready" ...
I0127 12:31:25.093397 532344 pod_ready.go:103] pod "kube-scheduler-no-preload-215237" in "kube-system" namespace has status "Ready":"False"
I0127 12:31:26.094833 532344 pod_ready.go:93] pod "kube-scheduler-no-preload-215237" in "kube-system" namespace has status "Ready":"True"
I0127 12:31:26.094861 532344 pod_ready.go:82] duration metric: took 3.007417278s for pod "kube-scheduler-no-preload-215237" in "kube-system" namespace to be "Ready" ...
I0127 12:31:26.094875 532344 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace to be "Ready" ...
I0127 12:31:28.101352 532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
I0127 12:31:30.601585 532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
I0127 12:31:32.604139 532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
I0127 12:31:35.102293 532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
I0127 12:31:37.600754 532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
I0127 12:31:40.100905 532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
I0127 12:31:42.100991 532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
I0127 12:31:44.101855 532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
I0127 12:31:46.101913 532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
I0127 12:31:48.102821 532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
I0127 12:31:50.103463 532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
[... the same pod_ready.go:103 line repeats roughly every 2.5s, 93 more times, from 12:31:52 through 12:35:25 ...]
I0127 12:35:25.106352 532344 pod_ready.go:103] pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace has status "Ready":"False"
I0127 12:35:26.095020 532344 pod_ready.go:82] duration metric: took 4m0.000127968s for pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace to be "Ready" ...
E0127 12:35:26.095050 532344 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-57422" in "kube-system" namespace to be "Ready" (will not retry!)
I0127 12:35:26.095079 532344 pod_ready.go:39] duration metric: took 4m15.54146268s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
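
The 4m0s ceiling above is minikube's per-pod WaitExtra limit, and in this profile the failure is plausibly structural: the addon image selection later in this log ("Using image fake.domain/registry.k8s.io/echoserver:1.4") points metrics-server at a domain that cannot resolve, so the pull, and therefore readiness, can never succeed. A minimal manual check of a pod stuck like this (pod name copied from the log; kubectl assumed to target this profile's kubeconfig):

# Phase, restart count and node placement of the stuck pod
kubectl -n kube-system get pod metrics-server-f79f97bbb-57422 -o wide
# Events at the end of the output normally name the concrete failure (e.g. ErrImagePull)
kubectl -n kube-system describe pod metrics-server-f79f97bbb-57422
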
I0127 12:35:26.095114 532344 kubeadm.go:597] duration metric: took 4m22.362333931s to restartPrimaryControlPlane
W0127 12:35:26.095189 532344 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
! Unable to restart control-plane node(s), will reset cluster: <no value>
I0127 12:35:26.095218 532344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I0127 12:35:27.761272 532344 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.666028034s)
I0127 12:35:27.761357 532344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0127 12:35:27.776204 532344 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0127 12:35:27.786547 532344 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0127 12:35:27.796338 532344 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0127 12:35:27.796364 532344 kubeadm.go:157] found existing configuration files:
I0127 12:35:27.796421 532344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0127 12:35:27.806214 532344 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0127 12:35:27.806277 532344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0127 12:35:27.817923 532344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0127 12:35:27.828012 532344 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0127 12:35:27.828079 532344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0127 12:35:27.837315 532344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0127 12:35:27.848052 532344 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0127 12:35:27.848106 532344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0127 12:35:27.860234 532344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0127 12:35:27.872361 532344 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0127 12:35:27.872422 532344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
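
The four grep/rm exchanges above are one stale-config check per kubeconfig file: keep the file if it already points at the expected endpoint, otherwise remove it. Since kubeadm reset just deleted all four, every grep exits 2 and the rm calls are no-ops. Condensed into the same pattern as a sketch (paths and URL copied verbatim from the log):

# Drop each kubeconfig unless it targets the expected control-plane endpoint
for f in admin kubelet controller-manager scheduler; do
  sudo grep -q "https://control-plane.minikube.internal:8443" /etc/kubernetes/$f.conf \
    || sudo rm -f /etc/kubernetes/$f.conf
done
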
I0127 12:35:27.885106 532344 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0127 12:35:27.934225 532344 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
I0127 12:35:27.934331 532344 kubeadm.go:310] [preflight] Running pre-flight checks
I0127 12:35:28.041622 532344 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0127 12:35:28.041807 532344 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0127 12:35:28.041967 532344 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0127 12:35:28.048826 532344 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0127 12:35:28.051333 532344 out.go:235] - Generating certificates and keys ...
I0127 12:35:28.051432 532344 kubeadm.go:310] [certs] Using existing ca certificate authority
I0127 12:35:28.051514 532344 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0127 12:35:28.051625 532344 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0127 12:35:28.051703 532344 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
I0127 12:35:28.051797 532344 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
I0127 12:35:28.051868 532344 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
I0127 12:35:28.051950 532344 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
I0127 12:35:28.052033 532344 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
I0127 12:35:28.052143 532344 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0127 12:35:28.052246 532344 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0127 12:35:28.052297 532344 kubeadm.go:310] [certs] Using the existing "sa" key
I0127 12:35:28.052371 532344 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0127 12:35:28.501590 532344 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0127 12:35:28.683534 532344 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0127 12:35:28.769933 532344 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0127 12:35:28.921369 532344 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0127 12:35:28.988234 532344 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0127 12:35:28.988795 532344 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0127 12:35:28.992437 532344 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0127 12:35:28.993990 532344 out.go:235] - Booting up control plane ...
I0127 12:35:28.994125 532344 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0127 12:35:28.994275 532344 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0127 12:35:28.994434 532344 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0127 12:35:29.013469 532344 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0127 12:35:29.020349 532344 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0127 12:35:29.020452 532344 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0127 12:35:29.162116 532344 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0127 12:35:29.162239 532344 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0127 12:35:30.161829 532344 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001098337s
I0127 12:35:30.161949 532344 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0127 12:35:34.663734 532344 kubeadm.go:310] [api-check] The API server is healthy after 4.502057638s
I0127 12:35:34.684263 532344 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0127 12:35:34.700836 532344 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0127 12:35:34.730827 532344 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0127 12:35:34.731121 532344 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-215237 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0127 12:35:34.741724 532344 kubeadm.go:310] [bootstrap-token] Using token: tfwuw1.vs4tk3z0lrym6pr2
I0127 12:35:34.742999 532344 out.go:235] - Configuring RBAC rules ...
I0127 12:35:34.743147 532344 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0127 12:35:34.749364 532344 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0127 12:35:34.759443 532344 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0127 12:35:34.764392 532344 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0127 12:35:34.768628 532344 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0127 12:35:34.772602 532344 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0127 12:35:35.071966 532344 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0127 12:35:35.500583 532344 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0127 12:35:36.073445 532344 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0127 12:35:36.075332 532344 kubeadm.go:310]
I0127 12:35:36.075428 532344 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0127 12:35:36.075445 532344 kubeadm.go:310]
I0127 12:35:36.075540 532344 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0127 12:35:36.075550 532344 kubeadm.go:310]
I0127 12:35:36.075586 532344 kubeadm.go:310] mkdir -p $HOME/.kube
I0127 12:35:36.075671 532344 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0127 12:35:36.075755 532344 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0127 12:35:36.075769 532344 kubeadm.go:310]
I0127 12:35:36.075846 532344 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0127 12:35:36.075860 532344 kubeadm.go:310]
I0127 12:35:36.075922 532344 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0127 12:35:36.075935 532344 kubeadm.go:310]
I0127 12:35:36.076003 532344 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0127 12:35:36.076102 532344 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0127 12:35:36.076224 532344 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0127 12:35:36.076304 532344 kubeadm.go:310]
I0127 12:35:36.076429 532344 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0127 12:35:36.076586 532344 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0127 12:35:36.076613 532344 kubeadm.go:310]
I0127 12:35:36.076710 532344 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tfwuw1.vs4tk3z0lrym6pr2 \
I0127 12:35:36.076899 532344 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:90220a32b97a19780da5783028af42ec7db4be5a9f4d7ee30b4871ae76b3d337 \
I0127 12:35:36.076933 532344 kubeadm.go:310] --control-plane
I0127 12:35:36.076940 532344 kubeadm.go:310]
I0127 12:35:36.077034 532344 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0127 12:35:36.077045 532344 kubeadm.go:310]
I0127 12:35:36.077154 532344 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tfwuw1.vs4tk3z0lrym6pr2 \
I0127 12:35:36.077287 532344 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:90220a32b97a19780da5783028af42ec7db4be5a9f4d7ee30b4871ae76b3d337
I0127 12:35:36.078154 532344 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
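
Acting on that preflight warning is a one-liner inside the VM (reachable via 'minikube -p no-preload-215237 ssh'); minikube normally manages the kubelet unit itself, so this matters mainly when replaying the kubeadm steps by hand:

# Enable the kubelet unit at boot, as the warning suggests
sudo systemctl enable kubelet.service
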
I0127 12:35:36.078355 532344 cni.go:84] Creating CNI manager for ""
I0127 12:35:36.078379 532344 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0127 12:35:36.080448 532344 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0127 12:35:36.081599 532344 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0127 12:35:36.097221 532344 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
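
The 496 bytes pushed above are minikube's generated bridge CNI config; the log does not show its contents. For orientation, a bridge + portmap conflist of the same general shape (every value below is an illustrative placeholder, not minikube's actual file) could be written like this:

sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
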
I0127 12:35:36.116819 532344 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0127 12:35:36.116867 532344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:35:36.116885 532344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-215237 minikube.k8s.io/updated_at=2025_01_27T12_35_36_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650 minikube.k8s.io/name=no-preload-215237 minikube.k8s.io/primary=true
I0127 12:35:36.411048 532344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:35:36.411073 532344 ops.go:34] apiserver oom_adj: -16
I0127 12:35:36.911315 532344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:35:37.411248 532344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:35:37.911876 532344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:35:38.411669 532344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:35:38.912069 532344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:35:39.412135 532344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:35:39.911694 532344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:35:40.411784 532344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:35:40.580164 532344 kubeadm.go:1113] duration metric: took 4.463356481s to wait for elevateKubeSystemPrivileges
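
The nine identical 'get sa default' runs above are a readiness poll, one attempt every ~500ms: minikube waits for the default service account to appear before declaring kube-system privileges elevated. As a standalone shell loop (binary and kubeconfig paths copied verbatim from the log), the same wait looks like:

# Poll until the default service account exists in the fresh cluster
until sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default \
      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
  sleep 0.5
done
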
I0127 12:35:40.580215 532344 kubeadm.go:394] duration metric: took 4m36.913272534s to StartCluster
I0127 12:35:40.580240 532344 settings.go:142] acquiring lock: {Name:mkc626b99c5f2ef89a002643cb7e51a3cbdf8ffc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 12:35:40.580344 532344 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20318-471120/kubeconfig
I0127 12:35:40.581635 532344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-471120/kubeconfig: {Name:mk452cc8a4801513f9fb799655fd8ea78318fe87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 12:35:40.581867 532344 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.127 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0127 12:35:40.581994 532344 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0127 12:35:40.582133 532344 addons.go:69] Setting storage-provisioner=true in profile "no-preload-215237"
I0127 12:35:40.582159 532344 addons.go:238] Setting addon storage-provisioner=true in "no-preload-215237"
I0127 12:35:40.582165 532344 config.go:182] Loaded profile config "no-preload-215237": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 12:35:40.582184 532344 addons.go:69] Setting metrics-server=true in profile "no-preload-215237"
I0127 12:35:40.582195 532344 addons.go:69] Setting default-storageclass=true in profile "no-preload-215237"
W0127 12:35:40.582174 532344 addons.go:247] addon storage-provisioner should already be in state true
I0127 12:35:40.582207 532344 addons.go:69] Setting dashboard=true in profile "no-preload-215237"
I0127 12:35:40.582230 532344 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-215237"
I0127 12:35:40.582239 532344 addons.go:238] Setting addon metrics-server=true in "no-preload-215237"
W0127 12:35:40.582256 532344 addons.go:247] addon metrics-server should already be in state true
I0127 12:35:40.582272 532344 host.go:66] Checking if "no-preload-215237" exists ...
I0127 12:35:40.582295 532344 host.go:66] Checking if "no-preload-215237" exists ...
I0127 12:35:40.582243 532344 addons.go:238] Setting addon dashboard=true in "no-preload-215237"
W0127 12:35:40.582332 532344 addons.go:247] addon dashboard should already be in state true
I0127 12:35:40.582361 532344 host.go:66] Checking if "no-preload-215237" exists ...
I0127 12:35:40.582677 532344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:35:40.582680 532344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:35:40.582718 532344 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:35:40.582751 532344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:35:40.582795 532344 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:35:40.582837 532344 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:35:40.583090 532344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:35:40.583137 532344 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:35:40.583649 532344 out.go:177] * Verifying Kubernetes components...
I0127 12:35:40.584826 532344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 12:35:40.600033 532344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39961
I0127 12:35:40.600495 532344 main.go:141] libmachine: () Calling .GetVersion
I0127 12:35:40.601101 532344 main.go:141] libmachine: Using API Version 1
I0127 12:35:40.601140 532344 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:35:40.601548 532344 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:35:40.601781 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetState
I0127 12:35:40.602959 532344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35769
I0127 12:35:40.603116 532344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40365
I0127 12:35:40.603517 532344 main.go:141] libmachine: () Calling .GetVersion
I0127 12:35:40.603557 532344 main.go:141] libmachine: () Calling .GetVersion
I0127 12:35:40.603576 532344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34579
I0127 12:35:40.604106 532344 main.go:141] libmachine: Using API Version 1
I0127 12:35:40.604110 532344 main.go:141] libmachine: Using API Version 1
I0127 12:35:40.604131 532344 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:35:40.604166 532344 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:35:40.604237 532344 main.go:141] libmachine: () Calling .GetVersion
I0127 12:35:40.604574 532344 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:35:40.604574 532344 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:35:40.604748 532344 main.go:141] libmachine: Using API Version 1
I0127 12:35:40.604773 532344 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:35:40.605148 532344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:35:40.605190 532344 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:35:40.605298 532344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:35:40.605350 532344 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:35:40.605426 532344 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:35:40.605560 532344 addons.go:238] Setting addon default-storageclass=true in "no-preload-215237"
W0127 12:35:40.605581 532344 addons.go:247] addon default-storageclass should already be in state true
I0127 12:35:40.605610 532344 host.go:66] Checking if "no-preload-215237" exists ...
I0127 12:35:40.605961 532344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:35:40.606003 532344 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:35:40.606008 532344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:35:40.606124 532344 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:35:40.622385 532344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33229
I0127 12:35:40.622402 532344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39055
I0127 12:35:40.622785 532344 main.go:141] libmachine: () Calling .GetVersion
I0127 12:35:40.622902 532344 main.go:141] libmachine: () Calling .GetVersion
I0127 12:35:40.623405 532344 main.go:141] libmachine: Using API Version 1
I0127 12:35:40.623425 532344 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:35:40.623426 532344 main.go:141] libmachine: Using API Version 1
I0127 12:35:40.623444 532344 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:35:40.623807 532344 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:35:40.624012 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetState
I0127 12:35:40.624084 532344 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:35:40.624295 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetState
I0127 12:35:40.625020 532344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36633
I0127 12:35:40.625761 532344 main.go:141] libmachine: () Calling .GetVersion
I0127 12:35:40.626202 532344 main.go:141] libmachine: (no-preload-215237) Calling .DriverName
I0127 12:35:40.626233 532344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40721
I0127 12:35:40.626815 532344 main.go:141] libmachine: (no-preload-215237) Calling .DriverName
I0127 12:35:40.627424 532344 main.go:141] libmachine: Using API Version 1
I0127 12:35:40.627447 532344 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:35:40.627766 532344 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0127 12:35:40.627810 532344 main.go:141] libmachine: () Calling .GetVersion
I0127 12:35:40.628024 532344 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:35:40.628336 532344 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0127 12:35:40.628494 532344 main.go:141] libmachine: Using API Version 1
I0127 12:35:40.628762 532344 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:35:40.628625 532344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:35:40.628856 532344 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:35:40.629181 532344 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:35:40.629838 532344 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0127 12:35:40.629857 532344 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0127 12:35:40.629878 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHHostname
I0127 12:35:40.630595 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetState
I0127 12:35:40.630632 532344 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0127 12:35:40.631966 532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0127 12:35:40.631995 532344 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0127 12:35:40.632018 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHHostname
I0127 12:35:40.633361 532344 main.go:141] libmachine: (no-preload-215237) Calling .DriverName
I0127 12:35:40.633759 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:35:40.634423 532344 main.go:141] libmachine: (no-preload-215237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:01", ip: ""} in network mk-no-preload-215237: {Iface:virbr4 ExpiryTime:2025-01-27 13:30:51 +0000 UTC Type:0 Mac:52:54:00:f8:56:01 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:no-preload-215237 Clientid:01:52:54:00:f8:56:01}
I0127 12:35:40.634453 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined IP address 192.168.72.127 and MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:35:40.634769 532344 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0127 12:35:40.634919 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHPort
I0127 12:35:40.635154 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHKeyPath
I0127 12:35:40.635360 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHUsername
I0127 12:35:40.635498 532344 sshutil.go:53] new ssh client: &{IP:192.168.72.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/no-preload-215237/id_rsa Username:docker}
I0127 12:35:40.636051 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:35:40.636213 532344 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0127 12:35:40.636227 532344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0127 12:35:40.636243 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHHostname
I0127 12:35:40.636517 532344 main.go:141] libmachine: (no-preload-215237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:01", ip: ""} in network mk-no-preload-215237: {Iface:virbr4 ExpiryTime:2025-01-27 13:30:51 +0000 UTC Type:0 Mac:52:54:00:f8:56:01 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:no-preload-215237 Clientid:01:52:54:00:f8:56:01}
I0127 12:35:40.636548 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined IP address 192.168.72.127 and MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:35:40.636753 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHPort
I0127 12:35:40.637014 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHKeyPath
I0127 12:35:40.637233 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHUsername
I0127 12:35:40.637418 532344 sshutil.go:53] new ssh client: &{IP:192.168.72.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/no-preload-215237/id_rsa Username:docker}
I0127 12:35:40.639612 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:35:40.640039 532344 main.go:141] libmachine: (no-preload-215237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:01", ip: ""} in network mk-no-preload-215237: {Iface:virbr4 ExpiryTime:2025-01-27 13:30:51 +0000 UTC Type:0 Mac:52:54:00:f8:56:01 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:no-preload-215237 Clientid:01:52:54:00:f8:56:01}
I0127 12:35:40.640087 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined IP address 192.168.72.127 and MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:35:40.640197 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHPort
I0127 12:35:40.640387 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHKeyPath
I0127 12:35:40.640530 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHUsername
I0127 12:35:40.640693 532344 sshutil.go:53] new ssh client: &{IP:192.168.72.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/no-preload-215237/id_rsa Username:docker}
I0127 12:35:40.647815 532344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37461
I0127 12:35:40.648197 532344 main.go:141] libmachine: () Calling .GetVersion
I0127 12:35:40.648682 532344 main.go:141] libmachine: Using API Version 1
I0127 12:35:40.648709 532344 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:35:40.649176 532344 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:35:40.649396 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetState
I0127 12:35:40.651079 532344 main.go:141] libmachine: (no-preload-215237) Calling .DriverName
I0127 12:35:40.651315 532344 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0127 12:35:40.651335 532344 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0127 12:35:40.651361 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHHostname
I0127 12:35:40.654639 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:35:40.655085 532344 main.go:141] libmachine: (no-preload-215237) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:56:01", ip: ""} in network mk-no-preload-215237: {Iface:virbr4 ExpiryTime:2025-01-27 13:30:51 +0000 UTC Type:0 Mac:52:54:00:f8:56:01 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:no-preload-215237 Clientid:01:52:54:00:f8:56:01}
I0127 12:35:40.655104 532344 main.go:141] libmachine: (no-preload-215237) DBG | domain no-preload-215237 has defined IP address 192.168.72.127 and MAC address 52:54:00:f8:56:01 in network mk-no-preload-215237
I0127 12:35:40.655257 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHPort
I0127 12:35:40.655465 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHKeyPath
I0127 12:35:40.655631 532344 main.go:141] libmachine: (no-preload-215237) Calling .GetSSHUsername
I0127 12:35:40.655792 532344 sshutil.go:53] new ssh client: &{IP:192.168.72.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/no-preload-215237/id_rsa Username:docker}
I0127 12:35:40.799070 532344 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0127 12:35:40.816802 532344 node_ready.go:35] waiting up to 6m0s for node "no-preload-215237" to be "Ready" ...
I0127 12:35:40.842677 532344 node_ready.go:49] node "no-preload-215237" has status "Ready":"True"
I0127 12:35:40.842703 532344 node_ready.go:38] duration metric: took 25.862086ms for node "no-preload-215237" to be "Ready" ...
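
Done by hand against the kubeconfig minikube just wrote, the same node gate is a single kubectl wait (the timeout mirrors the 6m0s above):

# Block until the node reports condition Ready=True
kubectl wait --for=condition=Ready node/no-preload-215237 --timeout=6m
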
I0127 12:35:40.842716 532344 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0127 12:35:40.853263 532344 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-v9stn" in "kube-system" namespace to be "Ready" ...
I0127 12:35:40.876376 532344 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0127 12:35:40.876407 532344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0127 12:35:40.898870 532344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0127 12:35:40.903314 532344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0127 12:35:40.916620 532344 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0127 12:35:40.916649 532344 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0127 12:35:41.067992 532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0127 12:35:41.068023 532344 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0127 12:35:41.072700 532344 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0127 12:35:41.072728 532344 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0127 12:35:41.155398 532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0127 12:35:41.155426 532344 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0127 12:35:41.194887 532344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0127 12:35:41.230877 532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0127 12:35:41.230909 532344 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0127 12:35:41.313376 532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0127 12:35:41.313400 532344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0127 12:35:41.442010 532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0127 12:35:41.442049 532344 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0127 12:35:41.486996 532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0127 12:35:41.487028 532344 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0127 12:35:41.616020 532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0127 12:35:41.616057 532344 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0127 12:35:41.690855 532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0127 12:35:41.690886 532344 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0127 12:35:41.720821 532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0127 12:35:41.720851 532344 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0127 12:35:41.754849 532344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0127 12:35:41.990168 532344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.091255427s)
I0127 12:35:41.990220 532344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.086878371s)
I0127 12:35:41.990249 532344 main.go:141] libmachine: Making call to close driver server
I0127 12:35:41.990262 532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
I0127 12:35:41.990249 532344 main.go:141] libmachine: Making call to close driver server
I0127 12:35:41.990370 532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
I0127 12:35:41.990668 532344 main.go:141] libmachine: (no-preload-215237) DBG | Closing plugin on server side
I0127 12:35:41.990683 532344 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:35:41.990719 532344 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:35:41.990725 532344 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:35:41.990733 532344 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:35:41.990747 532344 main.go:141] libmachine: Making call to close driver server
I0127 12:35:41.990758 532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
I0127 12:35:41.990821 532344 main.go:141] libmachine: (no-preload-215237) DBG | Closing plugin on server side
I0127 12:35:41.990734 532344 main.go:141] libmachine: Making call to close driver server
I0127 12:35:41.990857 532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
I0127 12:35:41.991027 532344 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:35:41.991042 532344 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:35:41.992412 532344 main.go:141] libmachine: (no-preload-215237) DBG | Closing plugin on server side
I0127 12:35:41.992462 532344 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:35:41.992477 532344 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:35:42.004951 532344 main.go:141] libmachine: Making call to close driver server
I0127 12:35:42.004969 532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
I0127 12:35:42.005238 532344 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:35:42.005254 532344 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:35:42.005271 532344 main.go:141] libmachine: (no-preload-215237) DBG | Closing plugin on server side
I0127 12:35:42.472191 532344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.277235038s)
I0127 12:35:42.472268 532344 main.go:141] libmachine: Making call to close driver server
I0127 12:35:42.472283 532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
I0127 12:35:42.472619 532344 main.go:141] libmachine: (no-preload-215237) DBG | Closing plugin on server side
I0127 12:35:42.472665 532344 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:35:42.472683 532344 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:35:42.472697 532344 main.go:141] libmachine: Making call to close driver server
I0127 12:35:42.472706 532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
I0127 12:35:42.472985 532344 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:35:42.473012 532344 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:35:42.473024 532344 addons.go:479] Verifying addon metrics-server=true in "no-preload-215237"
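
"Verifying addon metrics-server" here only confirms the manifests applied cleanly; whether the metrics pipeline actually serves is a separate question. One way to check by hand is via the aggregated API that metrics-server registers (v1beta1.metrics.k8s.io is the addon's standard APIService name, not something shown in this log):

# Available=True only once the deployment answers on the aggregated API
kubectl get apiservice v1beta1.metrics.k8s.io
# Fails until metrics are actually being scraped
kubectl top nodes
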
I0127 12:35:42.890307 532344 pod_ready.go:103] pod "coredns-668d6bf9bc-v9stn" in "kube-system" namespace has status "Ready":"False"
I0127 12:35:43.165047 532344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.410145551s)
I0127 12:35:43.165103 532344 main.go:141] libmachine: Making call to close driver server
I0127 12:35:43.165123 532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
I0127 12:35:43.165633 532344 main.go:141] libmachine: (no-preload-215237) DBG | Closing plugin on server side
I0127 12:35:43.165657 532344 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:35:43.165676 532344 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:35:43.165692 532344 main.go:141] libmachine: Making call to close driver server
I0127 12:35:43.165705 532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
I0127 12:35:43.165941 532344 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:35:43.165957 532344 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:35:43.167364 532344 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p no-preload-215237 addons enable metrics-server
I0127 12:35:43.168535 532344 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
I0127 12:35:43.169652 532344 addons.go:514] duration metric: took 2.587685868s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
I0127 12:35:45.359702 532344 pod_ready.go:103] pod "coredns-668d6bf9bc-v9stn" in "kube-system" namespace has status "Ready":"False"
I0127 12:35:46.359497 532344 pod_ready.go:93] pod "coredns-668d6bf9bc-v9stn" in "kube-system" namespace has status "Ready":"True"
I0127 12:35:46.359531 532344 pod_ready.go:82] duration metric: took 5.506181911s for pod "coredns-668d6bf9bc-v9stn" in "kube-system" namespace to be "Ready" ...
I0127 12:35:46.359547 532344 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-wwb9p" in "kube-system" namespace to be "Ready" ...
I0127 12:35:47.867744 532344 pod_ready.go:93] pod "coredns-668d6bf9bc-wwb9p" in "kube-system" namespace has status "Ready":"True"
I0127 12:35:47.867773 532344 pod_ready.go:82] duration metric: took 1.508215371s for pod "coredns-668d6bf9bc-wwb9p" in "kube-system" namespace to be "Ready" ...
I0127 12:35:47.867785 532344 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-215237" in "kube-system" namespace to be "Ready" ...
I0127 12:35:47.872748 532344 pod_ready.go:93] pod "etcd-no-preload-215237" in "kube-system" namespace has status "Ready":"True"
I0127 12:35:47.872769 532344 pod_ready.go:82] duration metric: took 4.975217ms for pod "etcd-no-preload-215237" in "kube-system" namespace to be "Ready" ...
I0127 12:35:47.872782 532344 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-215237" in "kube-system" namespace to be "Ready" ...
I0127 12:35:47.879135 532344 pod_ready.go:93] pod "kube-apiserver-no-preload-215237" in "kube-system" namespace has status "Ready":"True"
I0127 12:35:47.879153 532344 pod_ready.go:82] duration metric: took 6.364009ms for pod "kube-apiserver-no-preload-215237" in "kube-system" namespace to be "Ready" ...
I0127 12:35:47.879170 532344 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-215237" in "kube-system" namespace to be "Ready" ...
I0127 12:35:47.884792 532344 pod_ready.go:93] pod "kube-controller-manager-no-preload-215237" in "kube-system" namespace has status "Ready":"True"
I0127 12:35:47.884809 532344 pod_ready.go:82] duration metric: took 5.632068ms for pod "kube-controller-manager-no-preload-215237" in "kube-system" namespace to be "Ready" ...
I0127 12:35:47.884817 532344 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bbnm2" in "kube-system" namespace to be "Ready" ...
I0127 12:35:47.957535 532344 pod_ready.go:93] pod "kube-proxy-bbnm2" in "kube-system" namespace has status "Ready":"True"
I0127 12:35:47.957564 532344 pod_ready.go:82] duration metric: took 72.739132ms for pod "kube-proxy-bbnm2" in "kube-system" namespace to be "Ready" ...
I0127 12:35:47.957577 532344 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-215237" in "kube-system" namespace to be "Ready" ...
I0127 12:35:48.358062 532344 pod_ready.go:93] pod "kube-scheduler-no-preload-215237" in "kube-system" namespace has status "Ready":"True"
I0127 12:35:48.358087 532344 pod_ready.go:82] duration metric: took 400.502078ms for pod "kube-scheduler-no-preload-215237" in "kube-system" namespace to be "Ready" ...
I0127 12:35:48.358095 532344 pod_ready.go:39] duration metric: took 7.515367235s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
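Each pod_ready wait above polls the pod's Ready condition until it reports "True" or the 6m0s budget lapses. For orientation only, here is a minimal client-go sketch of that polling pattern, assuming direct access to the cluster's kubeconfig; it is not minikube's pod_ready.go, and the kubeconfig path and pod name are copied from this log purely for illustration.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls every 2s until the named pod reports condition
// Ready=True, or the timeout (the 6m0s budget seen in the log) expires.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil // condition not posted yet
		})
}

func main() {
	// Hypothetical local kubeconfig path; the test drives this over SSH instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(context.Background(), cs, "kube-system", "kube-scheduler-no-preload-215237", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println(`pod has status "Ready":"True"`)
}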
I0127 12:35:48.358124 532344 api_server.go:52] waiting for apiserver process to appear ...
I0127 12:35:48.358180 532344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 12:35:48.381657 532344 api_server.go:72] duration metric: took 7.799751759s to wait for apiserver process to appear ...
I0127 12:35:48.381684 532344 api_server.go:88] waiting for apiserver healthz status ...
I0127 12:35:48.381704 532344 api_server.go:253] Checking apiserver healthz at https://192.168.72.127:8443/healthz ...
I0127 12:35:48.387590 532344 api_server.go:279] https://192.168.72.127:8443/healthz returned 200:
ok
I0127 12:35:48.388765 532344 api_server.go:141] control plane version: v1.32.1
I0127 12:35:48.388787 532344 api_server.go:131] duration metric: took 7.09706ms to wait for apiserver health ...
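The healthz wait above is a plain HTTPS GET against the apiserver that treats a 200 response with body "ok" as healthy, entered only after the pgrep check confirms a kube-apiserver process exists. A self-contained sketch of an equivalent probe (illustrative, not minikube's api_server.go):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.72.127:8443/healthz" // endpoint taken from the log
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver presents a cluster-internal CA cert; a throwaway
		// probe like this skips verification, real clients should pin the CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body) // expects 200 and "ok"
}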
I0127 12:35:48.388795 532344 system_pods.go:43] waiting for kube-system pods to appear ...
I0127 12:35:48.560605 532344 system_pods.go:59] 9 kube-system pods found
I0127 12:35:48.560642 532344 system_pods.go:61] "coredns-668d6bf9bc-v9stn" [011e6981-39d0-4fa1-bf1b-3d1e06c7c71a] Running
I0127 12:35:48.560650 532344 system_pods.go:61] "coredns-668d6bf9bc-wwb9p" [0a034560-980a-40fb-9603-be18d02b6f05] Running
I0127 12:35:48.560656 532344 system_pods.go:61] "etcd-no-preload-215237" [8b9ab7f2-224f-4373-9dc2-fa794a60d922] Running
I0127 12:35:48.560659 532344 system_pods.go:61] "kube-apiserver-no-preload-215237" [064e0d8e-5d82-42bb-979d-cd0e9aa13f56] Running
I0127 12:35:48.560663 532344 system_pods.go:61] "kube-controller-manager-no-preload-215237" [dd9c190f-c01e-4fa7-b033-57463b032d30] Running
I0127 12:35:48.560666 532344 system_pods.go:61] "kube-proxy-bbnm2" [dd89ae69-6ad2-44cb-9c80-ba5529e22dc1] Running
I0127 12:35:48.560671 532344 system_pods.go:61] "kube-scheduler-no-preload-215237" [41c25fba-7af8-4e0e-b96d-57be786d703c] Running
I0127 12:35:48.560680 532344 system_pods.go:61] "metrics-server-f79f97bbb-lqck5" [3447c2da-cbb0-412c-a8d9-2be32c8e6dad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0127 12:35:48.560686 532344 system_pods.go:61] "storage-provisioner" [9627d136-2ecb-4cc3-969d-b62de2261147] Running
I0127 12:35:48.560696 532344 system_pods.go:74] duration metric: took 171.894881ms to wait for pod list to return data ...
I0127 12:35:48.560709 532344 default_sa.go:34] waiting for default service account to be created ...
I0127 12:35:48.760164 532344 default_sa.go:45] found service account: "default"
I0127 12:35:48.760270 532344 default_sa.go:55] duration metric: took 199.548191ms for default service account to be created ...
I0127 12:35:48.760295 532344 system_pods.go:137] waiting for k8s-apps to be running ...
I0127 12:35:48.961828 532344 system_pods.go:87] 9 kube-system pods found
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p no-preload-215237 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.32.1": signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-215237 -n no-preload-215237
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p no-preload-215237 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-215237 logs -n 25: (1.268182745s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/SecondStart logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
| addons | enable metrics-server -p old-k8s-version-858845 | old-k8s-version-858845 | jenkins | v1.35.0 | 27 Jan 25 12:30 UTC | 27 Jan 25 12:30 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p old-k8s-version-858845 | old-k8s-version-858845 | jenkins | v1.35.0 | 27 Jan 25 12:30 UTC | 27 Jan 25 12:31 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p no-preload-215237 | no-preload-215237 | jenkins | v1.35.0 | 27 Jan 25 12:30 UTC | 27 Jan 25 12:30 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p no-preload-215237 | no-preload-215237 | jenkins | v1.35.0 | 27 Jan 25 12:30 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --preload=false --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.1 | | | | | |
| addons | enable dashboard -p embed-certs-346100 | embed-certs-346100 | jenkins | v1.35.0 | 27 Jan 25 12:31 UTC | 27 Jan 25 12:31 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p embed-certs-346100 | embed-certs-346100 | jenkins | v1.35.0 | 27 Jan 25 12:31 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --embed-certs --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.1 | | | | | |
| addons | enable dashboard -p default-k8s-diff-port-887672 | default-k8s-diff-port-887672 | jenkins | v1.35.0 | 27 Jan 25 12:31 UTC | 27 Jan 25 12:31 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p | default-k8s-diff-port-887672 | jenkins | v1.35.0 | 27 Jan 25 12:31 UTC | |
| | default-k8s-diff-port-887672 | | | | | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --apiserver-port=8444 | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.1 | | | | | |
| addons | enable dashboard -p old-k8s-version-858845 | old-k8s-version-858845 | jenkins | v1.35.0 | 27 Jan 25 12:31 UTC | 27 Jan 25 12:31 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p old-k8s-version-858845 | old-k8s-version-858845 | jenkins | v1.35.0 | 27 Jan 25 12:31 UTC | 27 Jan 25 12:34 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| image | old-k8s-version-858845 image | old-k8s-version-858845 | jenkins | v1.35.0 | 27 Jan 25 12:34 UTC | 27 Jan 25 12:34 UTC |
| | list --format=json | | | | | |
| pause | -p old-k8s-version-858845 | old-k8s-version-858845 | jenkins | v1.35.0 | 27 Jan 25 12:34 UTC | 27 Jan 25 12:34 UTC |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p old-k8s-version-858845 | old-k8s-version-858845 | jenkins | v1.35.0 | 27 Jan 25 12:34 UTC | 27 Jan 25 12:34 UTC |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p old-k8s-version-858845 | old-k8s-version-858845 | jenkins | v1.35.0 | 27 Jan 25 12:34 UTC | 27 Jan 25 12:34 UTC |
| delete | -p old-k8s-version-858845 | old-k8s-version-858845 | jenkins | v1.35.0 | 27 Jan 25 12:34 UTC | 27 Jan 25 12:34 UTC |
| start | -p newest-cni-610630 --memory=2200 --alsologtostderr | newest-cni-610630 | jenkins | v1.35.0 | 27 Jan 25 12:34 UTC | 27 Jan 25 12:35 UTC |
| | --wait=apiserver,system_pods,default_sa | | | | | |
| | --network-plugin=cni | | | | | |
| | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 | | | | | |
| | --driver=kvm2 --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.1 | | | | | |
| addons | enable metrics-server -p newest-cni-610630 | newest-cni-610630 | jenkins | v1.35.0 | 27 Jan 25 12:35 UTC | 27 Jan 25 12:35 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p newest-cni-610630 | newest-cni-610630 | jenkins | v1.35.0 | 27 Jan 25 12:35 UTC | 27 Jan 25 12:35 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p newest-cni-610630 | newest-cni-610630 | jenkins | v1.35.0 | 27 Jan 25 12:35 UTC | 27 Jan 25 12:35 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p newest-cni-610630 --memory=2200 --alsologtostderr | newest-cni-610630 | jenkins | v1.35.0 | 27 Jan 25 12:35 UTC | 27 Jan 25 12:36 UTC |
| | --wait=apiserver,system_pods,default_sa | | | | | |
| | --network-plugin=cni | | | | | |
| | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 | | | | | |
| | --driver=kvm2 --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.1 | | | | | |
| image | newest-cni-610630 image list | newest-cni-610630 | jenkins | v1.35.0 | 27 Jan 25 12:36 UTC | 27 Jan 25 12:36 UTC |
| | --format=json | | | | | |
| pause | -p newest-cni-610630 | newest-cni-610630 | jenkins | v1.35.0 | 27 Jan 25 12:36 UTC | 27 Jan 25 12:36 UTC |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p newest-cni-610630 | newest-cni-610630 | jenkins | v1.35.0 | 27 Jan 25 12:36 UTC | 27 Jan 25 12:36 UTC |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p newest-cni-610630 | newest-cni-610630 | jenkins | v1.35.0 | 27 Jan 25 12:36 UTC | 27 Jan 25 12:36 UTC |
| delete | -p newest-cni-610630 | newest-cni-610630 | jenkins | v1.35.0 | 27 Jan 25 12:36 UTC | 27 Jan 25 12:36 UTC |
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2025/01/27 12:35:43
Running on machine: ubuntu-20-agent-9
Binary: Built with gc go1.23.4 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
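The header above documents klog's record layout. When reading the rest of this section it helps to split records on the process-ID column, since four start processes (532344, 532607, 532844, 534894) interleave below. A tiny illustrative parser for that layout, not part of minikube:

package main

import (
	"fmt"
	"regexp"
)

// Matches the header's format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
// (the "threadid" column is effectively the process ID here, which is what
// lets the interleaved log streams be teased apart).
var klogRe = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./-]+:\d+)\] (.*)$`)

func main() {
	line := "I0127 12:35:43.059479 534894 out.go:345] Setting OutFile to fd 1 ..."
	m := klogRe.FindStringSubmatch(line)
	if m == nil {
		panic("not a klog-formatted line")
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s src=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}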
I0127 12:35:43.059479 534894 out.go:345] Setting OutFile to fd 1 ...
I0127 12:35:43.059651 534894 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:35:43.059664 534894 out.go:358] Setting ErrFile to fd 2...
I0127 12:35:43.059671 534894 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:35:43.059931 534894 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-471120/.minikube/bin
I0127 12:35:43.061091 534894 out.go:352] Setting JSON to false
I0127 12:35:43.062772 534894 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":11886,"bootTime":1737969457,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0127 12:35:43.062914 534894 start.go:139] virtualization: kvm guest
I0127 12:35:43.064927 534894 out.go:177] * [newest-cni-610630] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
I0127 12:35:43.066246 534894 out.go:177] - MINIKUBE_LOCATION=20318
I0127 12:35:43.066268 534894 notify.go:220] Checking for updates...
I0127 12:35:43.068595 534894 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0127 12:35:43.069716 534894 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20318-471120/kubeconfig
I0127 12:35:43.070810 534894 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-471120/.minikube
I0127 12:35:43.071853 534894 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0127 12:35:43.072978 534894 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0127 12:35:43.074838 534894 config.go:182] Loaded profile config "newest-cni-610630": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 12:35:43.075450 534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:35:43.075519 534894 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:35:43.091909 534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35965
I0127 12:35:43.093149 534894 main.go:141] libmachine: () Calling .GetVersion
I0127 12:35:43.093802 534894 main.go:141] libmachine: Using API Version 1
I0127 12:35:43.093834 534894 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:35:43.094269 534894 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:35:43.094579 534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
I0127 12:35:43.094848 534894 driver.go:394] Setting default libvirt URI to qemu:///system
I0127 12:35:43.095161 534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:35:43.095202 534894 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:35:43.110695 534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34581
I0127 12:35:43.111212 534894 main.go:141] libmachine: () Calling .GetVersion
I0127 12:35:43.111903 534894 main.go:141] libmachine: Using API Version 1
I0127 12:35:43.111935 534894 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:35:43.112295 534894 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:35:43.112533 534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
I0127 12:35:43.153545 534894 out.go:177] * Using the kvm2 driver based on existing profile
I0127 12:35:40.799070 532344 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0127 12:35:40.816802 532344 node_ready.go:35] waiting up to 6m0s for node "no-preload-215237" to be "Ready" ...
I0127 12:35:40.842677 532344 node_ready.go:49] node "no-preload-215237" has status "Ready":"True"
I0127 12:35:40.842703 532344 node_ready.go:38] duration metric: took 25.862086ms for node "no-preload-215237" to be "Ready" ...
I0127 12:35:40.842716 532344 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0127 12:35:40.853263 532344 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-v9stn" in "kube-system" namespace to be "Ready" ...
I0127 12:35:40.876376 532344 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0127 12:35:40.876407 532344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0127 12:35:40.898870 532344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0127 12:35:40.903314 532344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0127 12:35:40.916620 532344 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0127 12:35:40.916649 532344 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0127 12:35:41.067992 532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0127 12:35:41.068023 532344 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0127 12:35:41.072700 532344 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0127 12:35:41.072728 532344 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0127 12:35:41.155398 532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0127 12:35:41.155426 532344 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0127 12:35:41.194887 532344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0127 12:35:41.230877 532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0127 12:35:41.230909 532344 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0127 12:35:41.313376 532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0127 12:35:41.313400 532344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0127 12:35:41.442010 532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0127 12:35:41.442049 532344 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0127 12:35:41.486996 532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0127 12:35:41.487028 532344 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0127 12:35:41.616020 532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0127 12:35:41.616057 532344 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0127 12:35:41.690855 532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0127 12:35:41.690886 532344 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0127 12:35:41.720821 532344 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0127 12:35:41.720851 532344 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0127 12:35:41.754849 532344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
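Each addon install above follows a two-step pattern: scp the manifest into /etc/kubernetes/addons, then batch all of a component's manifests into a single kubectl apply using the guest's own kubectl binary and kubeconfig. A stripped-down local sketch of the apply step, with the SSH hop omitted and the manifest list abbreviated:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	// Manifest paths as staged by the scp steps above (list abbreviated).
	manifests := []string{
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-clusterrole.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml",
	}
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("/var/lib/minikube/binaries/v1.32.1/kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	start := time.Now()
	if out, err := cmd.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("apply failed: %v\n%s", err, out))
	}
	fmt.Printf("Completed: kubectl apply (%s)\n", time.Since(start))
}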
I0127 12:35:41.990168 532344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.091255427s)
I0127 12:35:41.990220 532344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.086878371s)
I0127 12:35:41.990249 532344 main.go:141] libmachine: Making call to close driver server
I0127 12:35:41.990262 532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
I0127 12:35:41.990249 532344 main.go:141] libmachine: Making call to close driver server
I0127 12:35:41.990370 532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
I0127 12:35:41.990668 532344 main.go:141] libmachine: (no-preload-215237) DBG | Closing plugin on server side
I0127 12:35:41.990683 532344 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:35:41.990719 532344 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:35:41.990725 532344 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:35:41.990733 532344 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:35:41.990747 532344 main.go:141] libmachine: Making call to close driver server
I0127 12:35:41.990758 532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
I0127 12:35:41.990821 532344 main.go:141] libmachine: (no-preload-215237) DBG | Closing plugin on server side
I0127 12:35:41.990734 532344 main.go:141] libmachine: Making call to close driver server
I0127 12:35:41.990857 532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
I0127 12:35:41.991027 532344 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:35:41.991042 532344 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:35:41.992412 532344 main.go:141] libmachine: (no-preload-215237) DBG | Closing plugin on server side
I0127 12:35:41.992462 532344 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:35:41.992477 532344 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:35:42.004951 532344 main.go:141] libmachine: Making call to close driver server
I0127 12:35:42.004969 532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
I0127 12:35:42.005238 532344 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:35:42.005254 532344 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:35:42.005271 532344 main.go:141] libmachine: (no-preload-215237) DBG | Closing plugin on server side
I0127 12:35:42.472191 532344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.277235038s)
I0127 12:35:42.472268 532344 main.go:141] libmachine: Making call to close driver server
I0127 12:35:42.472283 532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
I0127 12:35:42.472619 532344 main.go:141] libmachine: (no-preload-215237) DBG | Closing plugin on server side
I0127 12:35:42.472665 532344 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:35:42.472683 532344 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:35:42.472697 532344 main.go:141] libmachine: Making call to close driver server
I0127 12:35:42.472706 532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
I0127 12:35:42.472985 532344 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:35:42.473012 532344 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:35:42.473024 532344 addons.go:479] Verifying addon metrics-server=true in "no-preload-215237"
I0127 12:35:42.890307 532344 pod_ready.go:103] pod "coredns-668d6bf9bc-v9stn" in "kube-system" namespace has status "Ready":"False"
I0127 12:35:43.165047 532344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.410145551s)
I0127 12:35:43.165103 532344 main.go:141] libmachine: Making call to close driver server
I0127 12:35:43.165123 532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
I0127 12:35:43.165633 532344 main.go:141] libmachine: (no-preload-215237) DBG | Closing plugin on server side
I0127 12:35:43.165657 532344 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:35:43.165676 532344 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:35:43.165692 532344 main.go:141] libmachine: Making call to close driver server
I0127 12:35:43.165705 532344 main.go:141] libmachine: (no-preload-215237) Calling .Close
I0127 12:35:43.165941 532344 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:35:43.165957 532344 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:35:43.167364 532344 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p no-preload-215237 addons enable metrics-server
I0127 12:35:43.168535 532344 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
I0127 12:35:43.154513 534894 start.go:297] selected driver: kvm2
I0127 12:35:43.154531 534894 start.go:901] validating driver "kvm2" against &{Name:newest-cni-610630 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-610630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0127 12:35:43.154653 534894 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0127 12:35:43.155362 534894 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 12:35:43.155469 534894 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20318-471120/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0127 12:35:43.172617 534894 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
I0127 12:35:43.173026 534894 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
I0127 12:35:43.173063 534894 cni.go:84] Creating CNI manager for ""
I0127 12:35:43.173110 534894 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0127 12:35:43.173145 534894 start.go:340] cluster config:
{Name:newest-cni-610630 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-610630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0127 12:35:43.173269 534894 iso.go:125] acquiring lock: {Name:mkc6ca3cbb5528e67f6dc9da0188f358e9fee620 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 12:35:43.174747 534894 out.go:177] * Starting "newest-cni-610630" primary control-plane node in "newest-cni-610630" cluster
I0127 12:35:43.175803 534894 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
I0127 12:35:43.175846 534894 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20318-471120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
I0127 12:35:43.175857 534894 cache.go:56] Caching tarball of preloaded images
I0127 12:35:43.175957 534894 preload.go:172] Found /home/jenkins/minikube-integration/20318-471120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0127 12:35:43.175970 534894 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on containerd
I0127 12:35:43.176077 534894 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/newest-cni-610630/config.json ...
I0127 12:35:43.176271 534894 start.go:360] acquireMachinesLock for newest-cni-610630: {Name:mk818835aef0de701295cc2c98fea95e1be33202 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0127 12:35:43.176324 534894 start.go:364] duration metric: took 32.573µs to acquireMachinesLock for "newest-cni-610630"
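acquireMachinesLock above serializes machine operations across the parallel test processes; its spec carries Delay:500ms between attempts and a 13m0s timeout. Minikube takes a named lock here (via juju/mutex); the stand-in below sketches the same acquire-with-delay-and-timeout contract using an O_EXCL lock file, with an invented lock name, for illustration only:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"time"
)

// acquire retries every `delay` until the lock file can be created
// exclusively or `timeout` elapses, mirroring the log's lock spec.
func acquire(name string, delay, timeout time.Duration) (func(), error) {
	path := filepath.Join(os.TempDir(), name+".lock")
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s", name)
		}
		time.Sleep(delay) // the log's Delay:500ms between attempts
	}
}

func main() {
	start := time.Now()
	release, err := acquire("machines-newest-cni-610630", 500*time.Millisecond, 13*time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	fmt.Printf("duration metric: took %s to acquireMachinesLock\n", time.Since(start))
}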
I0127 12:35:43.176345 534894 start.go:96] Skipping create...Using existing machine configuration
I0127 12:35:43.176356 534894 fix.go:54] fixHost starting:
I0127 12:35:43.176686 534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:35:43.176750 534894 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:35:43.191549 534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37401
I0127 12:35:43.191935 534894 main.go:141] libmachine: () Calling .GetVersion
I0127 12:35:43.192419 534894 main.go:141] libmachine: Using API Version 1
I0127 12:35:43.192448 534894 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:35:43.192934 534894 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:35:43.193138 534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
I0127 12:35:43.193300 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetState
I0127 12:35:43.195116 534894 fix.go:112] recreateIfNeeded on newest-cni-610630: state=Stopped err=<nil>
I0127 12:35:43.195141 534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
W0127 12:35:43.195320 534894 fix.go:138] unexpected machine state, will restart: <nil>
I0127 12:35:43.196456 534894 out.go:177] * Restarting existing kvm2 VM for "newest-cni-610630" ...
I0127 12:35:43.169652 532344 addons.go:514] duration metric: took 2.587685868s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
I0127 12:35:45.359702 532344 pod_ready.go:103] pod "coredns-668d6bf9bc-v9stn" in "kube-system" namespace has status "Ready":"False"
I0127 12:35:42.352585 532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
I0127 12:35:44.353035 532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
I0127 12:35:46.353087 532607 pod_ready.go:103] pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace has status "Ready":"False"
I0127 12:35:44.707430 532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
I0127 12:35:46.708896 532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
I0127 12:35:43.197457 534894 main.go:141] libmachine: (newest-cni-610630) Calling .Start
I0127 12:35:43.197621 534894 main.go:141] libmachine: (newest-cni-610630) starting domain...
I0127 12:35:43.197646 534894 main.go:141] libmachine: (newest-cni-610630) ensuring networks are active...
I0127 12:35:43.198412 534894 main.go:141] libmachine: (newest-cni-610630) Ensuring network default is active
I0127 12:35:43.198762 534894 main.go:141] libmachine: (newest-cni-610630) Ensuring network mk-newest-cni-610630 is active
I0127 12:35:43.199182 534894 main.go:141] libmachine: (newest-cni-610630) getting domain XML...
I0127 12:35:43.199981 534894 main.go:141] libmachine: (newest-cni-610630) creating domain...
I0127 12:35:44.514338 534894 main.go:141] libmachine: (newest-cni-610630) waiting for IP...
I0127 12:35:44.515307 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:35:44.515803 534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
I0127 12:35:44.515875 534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:44.515771 534929 retry.go:31] will retry after 248.83242ms: waiting for domain to come up
I0127 12:35:44.766511 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:35:44.767046 534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
I0127 12:35:44.767081 534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:44.767011 534929 retry.go:31] will retry after 381.268975ms: waiting for domain to come up
I0127 12:35:45.149680 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:35:45.150281 534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
I0127 12:35:45.150314 534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:45.150226 534929 retry.go:31] will retry after 435.74049ms: waiting for domain to come up
I0127 12:35:45.587978 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:35:45.588682 534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
I0127 12:35:45.588719 534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:45.588634 534929 retry.go:31] will retry after 577.775914ms: waiting for domain to come up
I0127 12:35:46.168596 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:35:46.169297 534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
I0127 12:35:46.169332 534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:46.169238 534929 retry.go:31] will retry after 539.718923ms: waiting for domain to come up
I0127 12:35:46.711082 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:35:46.711652 534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
I0127 12:35:46.711676 534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:46.711635 534929 retry.go:31] will retry after 607.430128ms: waiting for domain to come up
I0127 12:35:47.320403 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:35:47.320941 534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
I0127 12:35:47.321006 534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:47.320921 534929 retry.go:31] will retry after 772.973348ms: waiting for domain to come up
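The waiting-for-IP lines above are a retry loop: query libvirt for the domain's DHCP lease by MAC address, and on a miss sleep for a jittered, growing interval (retry.go:31) before trying again. A self-contained sketch of that loop shape, with a hypothetical lookupIP standing in for the libvirt lease query:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a stand-in for asking libvirt which IP was leased to the
// domain's MAC address (52:54:00:49:61:34 in the log).
func lookupIP(domain string) (string, error) {
	return "", errors.New("unable to find current IP address of domain")
}

func main() {
	backoff := 250 * time.Millisecond
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		ip, err := lookupIP("newest-cni-610630")
		if err == nil {
			fmt.Println("got IP:", ip)
			return
		}
		// Jitter the delay so parallel waiters don't retry in lockstep,
		// and grow it so a slow boot isn't polled too aggressively.
		d := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", d)
		time.Sleep(d)
		backoff = backoff * 3 / 2
	}
	fmt.Println("timed out waiting for IP")
}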
I0127 12:35:46.359497 532344 pod_ready.go:93] pod "coredns-668d6bf9bc-v9stn" in "kube-system" namespace has status "Ready":"True"
I0127 12:35:46.359531 532344 pod_ready.go:82] duration metric: took 5.506181911s for pod "coredns-668d6bf9bc-v9stn" in "kube-system" namespace to be "Ready" ...
I0127 12:35:46.359547 532344 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-wwb9p" in "kube-system" namespace to be "Ready" ...
I0127 12:35:47.867744 532344 pod_ready.go:93] pod "coredns-668d6bf9bc-wwb9p" in "kube-system" namespace has status "Ready":"True"
I0127 12:35:47.867773 532344 pod_ready.go:82] duration metric: took 1.508215371s for pod "coredns-668d6bf9bc-wwb9p" in "kube-system" namespace to be "Ready" ...
I0127 12:35:47.867785 532344 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-215237" in "kube-system" namespace to be "Ready" ...
I0127 12:35:47.872748 532344 pod_ready.go:93] pod "etcd-no-preload-215237" in "kube-system" namespace has status "Ready":"True"
I0127 12:35:47.872769 532344 pod_ready.go:82] duration metric: took 4.975217ms for pod "etcd-no-preload-215237" in "kube-system" namespace to be "Ready" ...
I0127 12:35:47.872782 532344 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-215237" in "kube-system" namespace to be "Ready" ...
I0127 12:35:47.879135 532344 pod_ready.go:93] pod "kube-apiserver-no-preload-215237" in "kube-system" namespace has status "Ready":"True"
I0127 12:35:47.879153 532344 pod_ready.go:82] duration metric: took 6.364009ms for pod "kube-apiserver-no-preload-215237" in "kube-system" namespace to be "Ready" ...
I0127 12:35:47.879170 532344 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-215237" in "kube-system" namespace to be "Ready" ...
I0127 12:35:47.884792 532344 pod_ready.go:93] pod "kube-controller-manager-no-preload-215237" in "kube-system" namespace has status "Ready":"True"
I0127 12:35:47.884809 532344 pod_ready.go:82] duration metric: took 5.632068ms for pod "kube-controller-manager-no-preload-215237" in "kube-system" namespace to be "Ready" ...
I0127 12:35:47.884817 532344 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bbnm2" in "kube-system" namespace to be "Ready" ...
I0127 12:35:47.957535 532344 pod_ready.go:93] pod "kube-proxy-bbnm2" in "kube-system" namespace has status "Ready":"True"
I0127 12:35:47.957564 532344 pod_ready.go:82] duration metric: took 72.739132ms for pod "kube-proxy-bbnm2" in "kube-system" namespace to be "Ready" ...
I0127 12:35:47.957577 532344 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-215237" in "kube-system" namespace to be "Ready" ...
I0127 12:35:48.358062 532344 pod_ready.go:93] pod "kube-scheduler-no-preload-215237" in "kube-system" namespace has status "Ready":"True"
I0127 12:35:48.358087 532344 pod_ready.go:82] duration metric: took 400.502078ms for pod "kube-scheduler-no-preload-215237" in "kube-system" namespace to be "Ready" ...
I0127 12:35:48.358095 532344 pod_ready.go:39] duration metric: took 7.515367235s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0127 12:35:48.358124 532344 api_server.go:52] waiting for apiserver process to appear ...
I0127 12:35:48.358180 532344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 12:35:48.381657 532344 api_server.go:72] duration metric: took 7.799751759s to wait for apiserver process to appear ...
I0127 12:35:48.381684 532344 api_server.go:88] waiting for apiserver healthz status ...
I0127 12:35:48.381704 532344 api_server.go:253] Checking apiserver healthz at https://192.168.72.127:8443/healthz ...
I0127 12:35:48.387590 532344 api_server.go:279] https://192.168.72.127:8443/healthz returned 200:
ok
I0127 12:35:48.388765 532344 api_server.go:141] control plane version: v1.32.1
I0127 12:35:48.388787 532344 api_server.go:131] duration metric: took 7.09706ms to wait for apiserver health ...
I0127 12:35:48.388795 532344 system_pods.go:43] waiting for kube-system pods to appear ...
I0127 12:35:48.560605 532344 system_pods.go:59] 9 kube-system pods found
I0127 12:35:48.560642 532344 system_pods.go:61] "coredns-668d6bf9bc-v9stn" [011e6981-39d0-4fa1-bf1b-3d1e06c7c71a] Running
I0127 12:35:48.560650 532344 system_pods.go:61] "coredns-668d6bf9bc-wwb9p" [0a034560-980a-40fb-9603-be18d02b6f05] Running
I0127 12:35:48.560656 532344 system_pods.go:61] "etcd-no-preload-215237" [8b9ab7f2-224f-4373-9dc2-fa794a60d922] Running
I0127 12:35:48.560659 532344 system_pods.go:61] "kube-apiserver-no-preload-215237" [064e0d8e-5d82-42bb-979d-cd0e9aa13f56] Running
I0127 12:35:48.560663 532344 system_pods.go:61] "kube-controller-manager-no-preload-215237" [dd9c190f-c01e-4fa7-b033-57463b032d30] Running
I0127 12:35:48.560666 532344 system_pods.go:61] "kube-proxy-bbnm2" [dd89ae69-6ad2-44cb-9c80-ba5529e22dc1] Running
I0127 12:35:48.560671 532344 system_pods.go:61] "kube-scheduler-no-preload-215237" [41c25fba-7af8-4e0e-b96d-57be786d703c] Running
I0127 12:35:48.560680 532344 system_pods.go:61] "metrics-server-f79f97bbb-lqck5" [3447c2da-cbb0-412c-a8d9-2be32c8e6dad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0127 12:35:48.560686 532344 system_pods.go:61] "storage-provisioner" [9627d136-2ecb-4cc3-969d-b62de2261147] Running
I0127 12:35:48.560696 532344 system_pods.go:74] duration metric: took 171.894881ms to wait for pod list to return data ...
I0127 12:35:48.560709 532344 default_sa.go:34] waiting for default service account to be created ...
I0127 12:35:48.760164 532344 default_sa.go:45] found service account: "default"
I0127 12:35:48.760270 532344 default_sa.go:55] duration metric: took 199.548191ms for default service account to be created ...
I0127 12:35:48.760295 532344 system_pods.go:137] waiting for k8s-apps to be running ...
I0127 12:35:48.961828 532344 system_pods.go:87] 9 kube-system pods found
I0127 12:35:48.846560 532607 pod_ready.go:82] duration metric: took 4m0.000837349s for pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace to be "Ready" ...
E0127 12:35:48.846588 532607 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-7qdhh" in "kube-system" namespace to be "Ready" (will not retry!)
I0127 12:35:48.846609 532607 pod_ready.go:39] duration metric: took 4m15.043496386s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0127 12:35:48.846642 532607 kubeadm.go:597] duration metric: took 4m22.373102966s to restartPrimaryControlPlane
W0127 12:35:48.846704 532607 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
I0127 12:35:48.846732 532607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I0127 12:35:51.040149 532607 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.193395005s)
I0127 12:35:51.040242 532607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0127 12:35:51.059048 532607 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0127 12:35:51.071298 532607 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0127 12:35:51.083050 532607 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0127 12:35:51.083071 532607 kubeadm.go:157] found existing configuration files:
I0127 12:35:51.083125 532607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0127 12:35:51.095124 532607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0127 12:35:51.095208 532607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0127 12:35:51.109222 532607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0127 12:35:51.120314 532607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0127 12:35:51.120390 532607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0127 12:35:51.129841 532607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0127 12:35:51.138490 532607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0127 12:35:51.138545 532607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0127 12:35:51.148658 532607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0127 12:35:51.157842 532607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0127 12:35:51.157894 532607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
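The config check above greps each leftover kubeconfig for the expected control-plane endpoint and removes any file that does not mention it; here every grep exits with status 2 because the kubeadm reset already wiped the files. The same decision expressed in plain Go (a sketch, not minikube's kubeadm.go):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, c := range confs {
		data, err := os.ReadFile(c)
		if err != nil {
			continue // file already gone: nothing to clean up
		}
		if !strings.Contains(string(data), endpoint) {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, c)
			os.Remove(c)
		}
	}
}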
I0127 12:35:51.167146 532607 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0127 12:35:51.220576 532607 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
I0127 12:35:51.220796 532607 kubeadm.go:310] [preflight] Running pre-flight checks
I0127 12:35:51.342653 532607 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0127 12:35:51.342830 532607 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0127 12:35:51.343020 532607 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0127 12:35:51.348865 532607 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0127 12:35:51.351235 532607 out.go:235] - Generating certificates and keys ...
I0127 12:35:51.351355 532607 kubeadm.go:310] [certs] Using existing ca certificate authority
I0127 12:35:51.351445 532607 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0127 12:35:51.351549 532607 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0127 12:35:51.351635 532607 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
I0127 12:35:51.351728 532607 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
I0127 12:35:51.351801 532607 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
I0127 12:35:51.351908 532607 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
I0127 12:35:51.352000 532607 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
I0127 12:35:51.352111 532607 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0127 12:35:51.352262 532607 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0127 12:35:51.352422 532607 kubeadm.go:310] [certs] Using the existing "sa" key
I0127 12:35:51.352546 532607 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0127 12:35:51.416524 532607 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0127 12:35:51.666997 532607 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0127 12:35:51.867237 532607 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0127 12:35:52.007584 532607 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0127 12:35:52.100986 532607 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0127 12:35:52.101889 532607 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0127 12:35:52.105806 532607 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0127 12:35:52.107605 532607 out.go:235] - Booting up control plane ...
I0127 12:35:52.107745 532607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0127 12:35:52.108083 532607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0127 12:35:52.109913 532607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0127 12:35:52.146307 532607 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0127 12:35:52.156130 532607 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0127 12:35:52.156211 532607 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0127 12:35:52.316523 532607 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0127 12:35:52.316653 532607 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
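The kubelet-check above polls the kubelet's healthz endpoint until it answers 200 OK or the 4m0s budget lapses. A minimal Go sketch of that style of probe loop, assuming a 500ms probe interval (the URL and overall timeout come from the log line; the helper itself is illustrative, not kubeadm's actual code):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns HTTP 200 or the deadline passes.
// Illustrative only; kubeadm's real check lives in its wait-control-plane phase.
func waitHealthy(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	client := &http.Client{Timeout: 2 * time.Second}
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // brief pause between probes
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}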
I0127 12:35:48.711637 532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
I0127 12:35:51.208760 532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
I0127 12:35:48.096119 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:35:48.096791 534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
I0127 12:35:48.096823 534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:48.096728 534929 retry.go:31] will retry after 1.301268199s: waiting for domain to come up
I0127 12:35:49.400077 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:35:49.400697 534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
I0127 12:35:49.400729 534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:49.400664 534929 retry.go:31] will retry after 1.62599798s: waiting for domain to come up
I0127 12:35:51.029156 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:35:51.029715 534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
I0127 12:35:51.029746 534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:51.029706 534929 retry.go:31] will retry after 1.477748588s: waiting for domain to come up
I0127 12:35:52.509484 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:35:52.510252 534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
I0127 12:35:52.510299 534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:52.510150 534929 retry.go:31] will retry after 1.875473187s: waiting for domain to come up
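The retry.go lines above re-probe the libvirt domain for an IP address with a growing, jittered delay. A minimal Go sketch of that retry-with-randomized-backoff pattern; the doubling factor, jitter range, and attempt budget here are assumptions, not minikube's actual tuning:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryBackoff calls fn until it succeeds or attempts are exhausted,
// sleeping an exponentially growing, jittered delay between tries.
func retryBackoff(attempts int, base time.Duration, fn func() error) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		sleep := delay + jitter
		fmt.Printf("will retry after %s\n", sleep)
		time.Sleep(sleep)
		delay *= 2
	}
	return errors.New("gave up waiting for domain to come up")
}

func main() {
	start := time.Now()
	// Toy condition standing in for "domain has an IP address".
	_ = retryBackoff(5, time.Second, func() error {
		if time.Since(start) < 3*time.Second {
			return errors.New("unable to find current IP address")
		}
		return nil
	})
}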
I0127 12:35:53.322303 532607 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.005635238s
I0127 12:35:53.322436 532607 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0127 12:35:53.708069 532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
I0127 12:35:56.209743 532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
I0127 12:35:54.387170 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:35:54.387808 534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
I0127 12:35:54.387840 534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:54.387764 534929 retry.go:31] will retry after 2.219284161s: waiting for domain to come up
I0127 12:35:56.609666 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:35:56.610140 534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
I0127 12:35:56.610163 534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:56.610112 534929 retry.go:31] will retry after 3.124115638s: waiting for domain to come up
I0127 12:35:58.324673 532607 kubeadm.go:310] [api-check] The API server is healthy after 5.002577765s
I0127 12:35:58.341207 532607 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0127 12:35:58.354763 532607 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0127 12:35:58.376218 532607 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0127 12:35:58.376468 532607 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-346100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0127 12:35:58.389424 532607 kubeadm.go:310] [bootstrap-token] Using token: 5069a0.5f3g1pdxhpmrcoga
I0127 12:35:58.390773 532607 out.go:235] - Configuring RBAC rules ...
I0127 12:35:58.390901 532607 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0127 12:35:58.397069 532607 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0127 12:35:58.405069 532607 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0127 12:35:58.409291 532607 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
I0127 12:35:58.412914 532607 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0127 12:35:58.415499 532607 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0127 12:35:58.732028 532607 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0127 12:35:59.154936 532607 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0127 12:35:59.732670 532607 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0127 12:35:59.734653 532607 kubeadm.go:310]
I0127 12:35:59.734754 532607 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0127 12:35:59.734788 532607 kubeadm.go:310]
I0127 12:35:59.734919 532607 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0127 12:35:59.734933 532607 kubeadm.go:310]
I0127 12:35:59.734978 532607 kubeadm.go:310] mkdir -p $HOME/.kube
I0127 12:35:59.735094 532607 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0127 12:35:59.735193 532607 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0127 12:35:59.735206 532607 kubeadm.go:310]
I0127 12:35:59.735295 532607 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0127 12:35:59.735316 532607 kubeadm.go:310]
I0127 12:35:59.735384 532607 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0127 12:35:59.735392 532607 kubeadm.go:310]
I0127 12:35:59.735463 532607 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0127 12:35:59.735570 532607 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0127 12:35:59.735692 532607 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0127 12:35:59.735707 532607 kubeadm.go:310]
I0127 12:35:59.735853 532607 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0127 12:35:59.735964 532607 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0127 12:35:59.735986 532607 kubeadm.go:310]
I0127 12:35:59.736104 532607 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5069a0.5f3g1pdxhpmrcoga \
I0127 12:35:59.736265 532607 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:90220a32b97a19780da5783028af42ec7db4be5a9f4d7ee30b4871ae76b3d337 \
I0127 12:35:59.736299 532607 kubeadm.go:310] --control-plane
I0127 12:35:59.736312 532607 kubeadm.go:310]
I0127 12:35:59.736432 532607 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0127 12:35:59.736441 532607 kubeadm.go:310]
I0127 12:35:59.736583 532607 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5069a0.5f3g1pdxhpmrcoga \
I0127 12:35:59.736761 532607 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:90220a32b97a19780da5783028af42ec7db4be5a9f4d7ee30b4871ae76b3d337
I0127 12:35:59.738118 532607 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
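The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 digest of the cluster CA certificate's DER-encoded SubjectPublicKeyInfo. A small Go sketch that recomputes it from the ca.crt in the certificate directory noted earlier ("/var/lib/minikube/certs"); the flat file path and error style here are illustrative:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Inside the VM the CA lives under /var/lib/minikube/certs/ca.crt;
	// a local copy is read here for simplicity.
	pemBytes, err := os.ReadFile("ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}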
I0127 12:35:59.738152 532607 cni.go:84] Creating CNI manager for ""
I0127 12:35:59.738162 532607 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0127 12:35:59.739901 532607 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0127 12:35:59.741063 532607 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0127 12:35:59.759536 532607 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
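The 496-byte 1-k8s.conflist pushed above is what configures the bridge CNI plugin. The log does not show its contents, so the conflist below is only a representative bridge-plus-host-local example written out the same way; every field value is an assumption rather than minikube's exact file:

package main

import "os"

// A representative bridge CNI conflist. minikube's actual 1-k8s.conflist
// may differ in fields and values; the log records only its size.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}`

func main() {
	// Written locally here; minikube scp's the file to /etc/cni/net.d/.
	if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}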
I0127 12:35:59.777178 532607 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0127 12:35:59.777199 532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:35:59.777236 532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-346100 minikube.k8s.io/updated_at=2025_01_27T12_35_59_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650 minikube.k8s.io/name=embed-certs-346100 minikube.k8s.io/primary=true
I0127 12:35:59.974092 532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:35:59.974117 532607 ops.go:34] apiserver oom_adj: -16
I0127 12:36:00.474716 532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:36:00.974693 532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:36:01.474216 532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:36:01.974205 532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:35:58.707466 532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
I0127 12:36:01.206257 532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
I0127 12:35:59.736004 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:35:59.736626 534894 main.go:141] libmachine: (newest-cni-610630) DBG | unable to find current IP address of domain newest-cni-610630 in network mk-newest-cni-610630
I0127 12:35:59.736649 534894 main.go:141] libmachine: (newest-cni-610630) DBG | I0127 12:35:59.736597 534929 retry.go:31] will retry after 3.849528984s: waiting for domain to come up
I0127 12:36:02.475052 532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:36:02.975120 532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:36:03.474457 532607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:36:03.577041 532607 kubeadm.go:1113] duration metric: took 3.799909499s to wait for elevateKubeSystemPrivileges
I0127 12:36:03.577092 532607 kubeadm.go:394] duration metric: took 4m37.171719699s to StartCluster
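The repeated "kubectl get sa default" runs above are a wait loop: per the timestamps, minikube keeps probing roughly every 500ms until the default service account exists, so the cluster-admin binding created earlier has something to attach to. A sketch of that wait, shelling out to kubectl with the same flags (the overall budget is an assumption):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // budget is an assumption
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for default service account")
}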
I0127 12:36:03.577128 532607 settings.go:142] acquiring lock: {Name:mkc626b99c5f2ef89a002643cb7e51a3cbdf8ffc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 12:36:03.577224 532607 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20318-471120/kubeconfig
I0127 12:36:03.579144 532607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-471120/kubeconfig: {Name:mk452cc8a4801513f9fb799655fd8ea78318fe87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 12:36:03.579423 532607 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.206 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0127 12:36:03.579505 532607 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0127 12:36:03.579620 532607 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-346100"
I0127 12:36:03.579641 532607 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-346100"
W0127 12:36:03.579650 532607 addons.go:247] addon storage-provisioner should already be in state true
I0127 12:36:03.579651 532607 addons.go:69] Setting default-storageclass=true in profile "embed-certs-346100"
I0127 12:36:03.579676 532607 host.go:66] Checking if "embed-certs-346100" exists ...
I0127 12:36:03.579688 532607 config.go:182] Loaded profile config "embed-certs-346100": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 12:36:03.579700 532607 addons.go:69] Setting dashboard=true in profile "embed-certs-346100"
I0127 12:36:03.579723 532607 addons.go:238] Setting addon dashboard=true in "embed-certs-346100"
I0127 12:36:03.579715 532607 addons.go:69] Setting metrics-server=true in profile "embed-certs-346100"
W0127 12:36:03.579740 532607 addons.go:247] addon dashboard should already be in state true
I0127 12:36:03.579694 532607 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-346100"
I0127 12:36:03.579749 532607 addons.go:238] Setting addon metrics-server=true in "embed-certs-346100"
W0127 12:36:03.579764 532607 addons.go:247] addon metrics-server should already be in state true
I0127 12:36:03.579779 532607 host.go:66] Checking if "embed-certs-346100" exists ...
I0127 12:36:03.579800 532607 host.go:66] Checking if "embed-certs-346100" exists ...
I0127 12:36:03.580054 532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:36:03.580088 532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:36:03.580101 532607 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:36:03.580150 532607 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:36:03.580190 532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:36:03.580215 532607 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:36:03.580233 532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:36:03.580258 532607 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:36:03.581024 532607 out.go:177] * Verifying Kubernetes components...
I0127 12:36:03.582429 532607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 12:36:03.598339 532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34645
I0127 12:36:03.598375 532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44053
I0127 12:36:03.598838 532607 main.go:141] libmachine: () Calling .GetVersion
I0127 12:36:03.598892 532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34331
I0127 12:36:03.598919 532607 main.go:141] libmachine: () Calling .GetVersion
I0127 12:36:03.599306 532607 main.go:141] libmachine: () Calling .GetVersion
I0127 12:36:03.599470 532607 main.go:141] libmachine: Using API Version 1
I0127 12:36:03.599486 532607 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:36:03.599497 532607 main.go:141] libmachine: Using API Version 1
I0127 12:36:03.599511 532607 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:36:03.599722 532607 main.go:141] libmachine: Using API Version 1
I0127 12:36:03.599738 532607 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:36:03.599912 532607 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:36:03.599974 532607 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:36:03.600223 532607 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:36:03.600494 532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:36:03.600530 532607 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:36:03.600545 532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:36:03.600578 532607 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:36:03.600674 532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:36:03.600699 532607 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:36:03.600881 532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44251
I0127 12:36:03.601524 532607 main.go:141] libmachine: () Calling .GetVersion
I0127 12:36:03.602100 532607 main.go:141] libmachine: Using API Version 1
I0127 12:36:03.602116 532607 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:36:03.602471 532607 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:36:03.602687 532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetState
I0127 12:36:03.606648 532607 addons.go:238] Setting addon default-storageclass=true in "embed-certs-346100"
W0127 12:36:03.606677 532607 addons.go:247] addon default-storageclass should already be in state true
I0127 12:36:03.606709 532607 host.go:66] Checking if "embed-certs-346100" exists ...
I0127 12:36:03.607067 532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:36:03.607104 532607 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:36:03.619967 532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45171
I0127 12:36:03.620348 532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33043
I0127 12:36:03.620623 532607 main.go:141] libmachine: () Calling .GetVersion
I0127 12:36:03.620935 532607 main.go:141] libmachine: () Calling .GetVersion
I0127 12:36:03.621427 532607 main.go:141] libmachine: Using API Version 1
I0127 12:36:03.621447 532607 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:36:03.621789 532607 main.go:141] libmachine: Using API Version 1
I0127 12:36:03.621804 532607 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:36:03.621998 532607 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:36:03.622221 532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetState
I0127 12:36:03.622273 532607 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:36:03.622543 532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetState
I0127 12:36:03.624486 532607 main.go:141] libmachine: (embed-certs-346100) Calling .DriverName
I0127 12:36:03.624677 532607 main.go:141] libmachine: (embed-certs-346100) Calling .DriverName
I0127 12:36:03.625420 532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43503
I0127 12:36:03.626112 532607 main.go:141] libmachine: () Calling .GetVersion
I0127 12:36:03.626167 532607 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0127 12:36:03.626180 532607 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0127 12:36:03.626583 532607 main.go:141] libmachine: Using API Version 1
I0127 12:36:03.626602 532607 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:36:03.626611 532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41453
I0127 12:36:03.626942 532607 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:36:03.627027 532607 main.go:141] libmachine: () Calling .GetVersion
I0127 12:36:03.627437 532607 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0127 12:36:03.627453 532607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0127 12:36:03.627464 532607 main.go:141] libmachine: Using API Version 1
I0127 12:36:03.627467 532607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:36:03.627475 532607 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:36:03.627504 532607 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:36:03.627471 532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHHostname
I0127 12:36:03.627836 532607 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:36:03.628149 532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetState
I0127 12:36:03.628561 532607 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0127 12:36:03.629535 532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0127 12:36:03.629551 532607 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0127 12:36:03.629575 532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHHostname
I0127 12:36:03.630434 532607 main.go:141] libmachine: (embed-certs-346100) Calling .DriverName
I0127 12:36:03.631724 532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
I0127 12:36:03.632213 532607 main.go:141] libmachine: (embed-certs-346100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cd:c0", ip: ""} in network mk-embed-certs-346100: {Iface:virbr1 ExpiryTime:2025-01-27 13:31:13 +0000 UTC Type:0 Mac:52:54:00:8f:cd:c0 Iaid: IPaddr:192.168.50.206 Prefix:24 Hostname:embed-certs-346100 Clientid:01:52:54:00:8f:cd:c0}
I0127 12:36:03.632232 532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined IP address 192.168.50.206 and MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
I0127 12:36:03.632448 532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHPort
I0127 12:36:03.632593 532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHKeyPath
I0127 12:36:03.632682 532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHUsername
I0127 12:36:03.632867 532607 sshutil.go:53] new ssh client: &{IP:192.168.50.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/embed-certs-346100/id_rsa Username:docker}
I0127 12:36:03.632996 532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
I0127 12:36:03.633161 532607 main.go:141] libmachine: (embed-certs-346100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cd:c0", ip: ""} in network mk-embed-certs-346100: {Iface:virbr1 ExpiryTime:2025-01-27 13:31:13 +0000 UTC Type:0 Mac:52:54:00:8f:cd:c0 Iaid: IPaddr:192.168.50.206 Prefix:24 Hostname:embed-certs-346100 Clientid:01:52:54:00:8f:cd:c0}
I0127 12:36:03.633189 532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined IP address 192.168.50.206 and MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
I0127 12:36:03.633418 532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHPort
I0127 12:36:03.633573 532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHKeyPath
I0127 12:36:03.633701 532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHUsername
I0127 12:36:03.633812 532607 sshutil.go:53] new ssh client: &{IP:192.168.50.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/embed-certs-346100/id_rsa Username:docker}
I0127 12:36:03.634247 532607 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0127 12:36:03.635266 532607 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0127 12:36:03.635284 532607 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0127 12:36:03.635305 532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHHostname
I0127 12:36:03.637878 532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
I0127 12:36:03.638306 532607 main.go:141] libmachine: (embed-certs-346100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cd:c0", ip: ""} in network mk-embed-certs-346100: {Iface:virbr1 ExpiryTime:2025-01-27 13:31:13 +0000 UTC Type:0 Mac:52:54:00:8f:cd:c0 Iaid: IPaddr:192.168.50.206 Prefix:24 Hostname:embed-certs-346100 Clientid:01:52:54:00:8f:cd:c0}
I0127 12:36:03.638338 532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined IP address 192.168.50.206 and MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
I0127 12:36:03.638542 532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHPort
I0127 12:36:03.638697 532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHKeyPath
I0127 12:36:03.638867 532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHUsername
I0127 12:36:03.639116 532607 sshutil.go:53] new ssh client: &{IP:192.168.50.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/embed-certs-346100/id_rsa Username:docker}
I0127 12:36:03.643537 532607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41549
I0127 12:36:03.643881 532607 main.go:141] libmachine: () Calling .GetVersion
I0127 12:36:03.644309 532607 main.go:141] libmachine: Using API Version 1
I0127 12:36:03.644327 532607 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:36:03.644644 532607 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:36:03.644952 532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetState
I0127 12:36:03.646128 532607 main.go:141] libmachine: (embed-certs-346100) Calling .DriverName
I0127 12:36:03.646325 532607 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0127 12:36:03.646341 532607 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0127 12:36:03.646358 532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHHostname
I0127 12:36:03.649282 532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
I0127 12:36:03.649641 532607 main.go:141] libmachine: (embed-certs-346100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cd:c0", ip: ""} in network mk-embed-certs-346100: {Iface:virbr1 ExpiryTime:2025-01-27 13:31:13 +0000 UTC Type:0 Mac:52:54:00:8f:cd:c0 Iaid: IPaddr:192.168.50.206 Prefix:24 Hostname:embed-certs-346100 Clientid:01:52:54:00:8f:cd:c0}
I0127 12:36:03.649669 532607 main.go:141] libmachine: (embed-certs-346100) DBG | domain embed-certs-346100 has defined IP address 192.168.50.206 and MAC address 52:54:00:8f:cd:c0 in network mk-embed-certs-346100
I0127 12:36:03.649910 532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHPort
I0127 12:36:03.650077 532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHKeyPath
I0127 12:36:03.650198 532607 main.go:141] libmachine: (embed-certs-346100) Calling .GetSSHUsername
I0127 12:36:03.650298 532607 sshutil.go:53] new ssh client: &{IP:192.168.50.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/embed-certs-346100/id_rsa Username:docker}
I0127 12:36:03.805663 532607 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0127 12:36:03.824512 532607 node_ready.go:35] waiting up to 6m0s for node "embed-certs-346100" to be "Ready" ...
I0127 12:36:03.856505 532607 node_ready.go:49] node "embed-certs-346100" has status "Ready":"True"
I0127 12:36:03.856540 532607 node_ready.go:38] duration metric: took 31.977019ms for node "embed-certs-346100" to be "Ready" ...
I0127 12:36:03.856555 532607 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0127 12:36:03.863683 532607 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
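The node_ready and pod_ready checks above read the "Ready" condition back from the API server. A sketch of the same node check using client-go; the kubeconfig path and node name are taken from the log, and error handling is abbreviated:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("",
		"/home/jenkins/minikube-integration/20318-471120/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := client.CoreV1().Nodes().Get(context.Background(),
		"embed-certs-346100", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// A node is schedulable-ready when the NodeReady condition is True.
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("node %s has status Ready:%q\n", node.Name, c.Status)
		}
	}
}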
I0127 12:36:03.902624 532607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0127 12:36:03.925389 532607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0127 12:36:03.977654 532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0127 12:36:03.977686 532607 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0127 12:36:04.012033 532607 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0127 12:36:04.012063 532607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0127 12:36:04.029962 532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0127 12:36:04.029991 532607 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0127 12:36:04.076532 532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0127 12:36:04.076565 532607 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0127 12:36:04.136201 532607 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0127 12:36:04.136229 532607 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0127 12:36:04.142268 532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0127 12:36:04.142293 532607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0127 12:36:04.174895 532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0127 12:36:04.174919 532607 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0127 12:36:04.185938 532607 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0127 12:36:04.185959 532607 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0127 12:36:04.204606 532607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0127 12:36:04.226546 532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0127 12:36:04.226574 532607 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0127 12:36:04.340411 532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0127 12:36:04.340438 532607 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0127 12:36:04.424847 532607 main.go:141] libmachine: Making call to close driver server
I0127 12:36:04.424878 532607 main.go:141] libmachine: (embed-certs-346100) Calling .Close
I0127 12:36:04.425230 532607 main.go:141] libmachine: (embed-certs-346100) DBG | Closing plugin on server side
I0127 12:36:04.425269 532607 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:36:04.425293 532607 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:36:04.425304 532607 main.go:141] libmachine: Making call to close driver server
I0127 12:36:04.425329 532607 main.go:141] libmachine: (embed-certs-346100) Calling .Close
I0127 12:36:04.425596 532607 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:36:04.425613 532607 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:36:04.425627 532607 main.go:141] libmachine: (embed-certs-346100) DBG | Closing plugin on server side
I0127 12:36:04.443059 532607 main.go:141] libmachine: Making call to close driver server
I0127 12:36:04.443080 532607 main.go:141] libmachine: (embed-certs-346100) Calling .Close
I0127 12:36:04.443380 532607 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:36:04.443404 532607 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:36:04.457532 532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0127 12:36:04.457557 532607 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0127 12:36:04.529771 532607 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0127 12:36:04.529803 532607 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0127 12:36:04.581907 532607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0127 12:36:05.466462 532607 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.541011177s)
I0127 12:36:05.466526 532607 main.go:141] libmachine: Making call to close driver server
I0127 12:36:05.466544 532607 main.go:141] libmachine: (embed-certs-346100) Calling .Close
I0127 12:36:05.466865 532607 main.go:141] libmachine: (embed-certs-346100) DBG | Closing plugin on server side
I0127 12:36:05.466934 532607 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:36:05.466947 532607 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:36:05.466957 532607 main.go:141] libmachine: Making call to close driver server
I0127 12:36:05.466969 532607 main.go:141] libmachine: (embed-certs-346100) Calling .Close
I0127 12:36:05.467283 532607 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:36:05.467328 532607 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:36:05.467300 532607 main.go:141] libmachine: (embed-certs-346100) DBG | Closing plugin on server side
I0127 12:36:05.677171 532607 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.472522816s)
I0127 12:36:05.677230 532607 main.go:141] libmachine: Making call to close driver server
I0127 12:36:05.677244 532607 main.go:141] libmachine: (embed-certs-346100) Calling .Close
I0127 12:36:05.677645 532607 main.go:141] libmachine: (embed-certs-346100) DBG | Closing plugin on server side
I0127 12:36:05.677684 532607 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:36:05.677699 532607 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:36:05.677711 532607 main.go:141] libmachine: Making call to close driver server
I0127 12:36:05.677723 532607 main.go:141] libmachine: (embed-certs-346100) Calling .Close
I0127 12:36:05.678056 532607 main.go:141] libmachine: (embed-certs-346100) DBG | Closing plugin on server side
I0127 12:36:05.678091 532607 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:36:05.678115 532607 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:36:05.678132 532607 addons.go:479] Verifying addon metrics-server=true in "embed-certs-346100"
I0127 12:36:05.870203 532607 pod_ready.go:103] pod "etcd-embed-certs-346100" in "kube-system" namespace has status "Ready":"False"
I0127 12:36:06.503934 532607 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.921960102s)
I0127 12:36:06.504007 532607 main.go:141] libmachine: Making call to close driver server
I0127 12:36:06.504025 532607 main.go:141] libmachine: (embed-certs-346100) Calling .Close
I0127 12:36:06.504372 532607 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:36:06.504489 532607 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:36:06.504506 532607 main.go:141] libmachine: Making call to close driver server
I0127 12:36:06.504514 532607 main.go:141] libmachine: (embed-certs-346100) Calling .Close
I0127 12:36:06.504460 532607 main.go:141] libmachine: (embed-certs-346100) DBG | Closing plugin on server side
I0127 12:36:06.504814 532607 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:36:06.504834 532607 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:36:06.504835 532607 main.go:141] libmachine: (embed-certs-346100) DBG | Closing plugin on server side
I0127 12:36:06.506475 532607 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p embed-certs-346100 addons enable metrics-server
I0127 12:36:06.507672 532607 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
I0127 12:36:06.508878 532607 addons.go:514] duration metric: took 2.929397312s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
I0127 12:36:03.587872 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:36:03.588437 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has current primary IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:36:03.588458 534894 main.go:141] libmachine: (newest-cni-610630) found domain IP: 192.168.39.228
I0127 12:36:03.588471 534894 main.go:141] libmachine: (newest-cni-610630) reserving static IP address...
I0127 12:36:03.589076 534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "newest-cni-610630", mac: "52:54:00:49:61:34", ip: "192.168.39.228"} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
I0127 12:36:03.589105 534894 main.go:141] libmachine: (newest-cni-610630) reserved static IP address 192.168.39.228 for domain newest-cni-610630
I0127 12:36:03.589131 534894 main.go:141] libmachine: (newest-cni-610630) DBG | skip adding static IP to network mk-newest-cni-610630 - found existing host DHCP lease matching {name: "newest-cni-610630", mac: "52:54:00:49:61:34", ip: "192.168.39.228"}
I0127 12:36:03.589141 534894 main.go:141] libmachine: (newest-cni-610630) waiting for SSH...
I0127 12:36:03.589165 534894 main.go:141] libmachine: (newest-cni-610630) DBG | Getting to WaitForSSH function...
I0127 12:36:03.592182 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:36:03.592771 534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
I0127 12:36:03.592796 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:36:03.593171 534894 main.go:141] libmachine: (newest-cni-610630) DBG | Using SSH client type: external
I0127 12:36:03.593190 534894 main.go:141] libmachine: (newest-cni-610630) DBG | Using SSH private key: /home/jenkins/minikube-integration/20318-471120/.minikube/machines/newest-cni-610630/id_rsa (-rw-------)
I0127 12:36:03.593218 534894 main.go:141] libmachine: (newest-cni-610630) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.228 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20318-471120/.minikube/machines/newest-cni-610630/id_rsa -p 22] /usr/bin/ssh <nil>}
I0127 12:36:03.593228 534894 main.go:141] libmachine: (newest-cni-610630) DBG | About to run SSH command:
I0127 12:36:03.593239 534894 main.go:141] libmachine: (newest-cni-610630) DBG | exit 0
I0127 12:36:03.733183 534894 main.go:141] libmachine: (newest-cni-610630) DBG | SSH cmd err, output: <nil>:
I0127 12:36:03.733566 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetConfigRaw
I0127 12:36:03.734338 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetIP
I0127 12:36:03.737083 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:36:03.737511 534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
I0127 12:36:03.737553 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:36:03.737875 534894 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/newest-cni-610630/config.json ...
I0127 12:36:03.738075 534894 machine.go:93] provisionDockerMachine start ...
I0127 12:36:03.738099 534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
I0127 12:36:03.738370 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
I0127 12:36:03.741025 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:36:03.741354 534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
I0127 12:36:03.741384 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:36:03.741566 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
I0127 12:36:03.741756 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
I0127 12:36:03.741966 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
I0127 12:36:03.742141 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
I0127 12:36:03.742356 534894 main.go:141] libmachine: Using SSH client type: native
I0127 12:36:03.742588 534894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.39.228 22 <nil> <nil>}
I0127 12:36:03.742604 534894 main.go:141] libmachine: About to run SSH command:
hostname
I0127 12:36:03.853610 534894 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0127 12:36:03.853641 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetMachineName
I0127 12:36:03.853921 534894 buildroot.go:166] provisioning hostname "newest-cni-610630"
I0127 12:36:03.853957 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetMachineName
I0127 12:36:03.854185 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
I0127 12:36:03.857441 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:36:03.857928 534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
I0127 12:36:03.857961 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:36:03.858074 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
I0127 12:36:03.858293 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
I0127 12:36:03.858504 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
I0127 12:36:03.858678 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
I0127 12:36:03.858886 534894 main.go:141] libmachine: Using SSH client type: native
I0127 12:36:03.859093 534894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.39.228 22 <nil> <nil>}
I0127 12:36:03.859120 534894 main.go:141] libmachine: About to run SSH command:
sudo hostname newest-cni-610630 && echo "newest-cni-610630" | sudo tee /etc/hostname
I0127 12:36:03.986908 534894 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-610630
I0127 12:36:03.986946 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
I0127 12:36:03.990070 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:36:03.990587 534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
I0127 12:36:03.990628 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:36:03.990879 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
I0127 12:36:03.991122 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
I0127 12:36:03.991299 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
I0127 12:36:03.991452 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
I0127 12:36:03.991678 534894 main.go:141] libmachine: Using SSH client type: native
I0127 12:36:03.991897 534894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.39.228 22 <nil> <nil>}
I0127 12:36:03.991926 534894 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\snewest-cni-610630' /etc/hosts; then
	if grep -xq '127.0.1.1\s.*' /etc/hosts; then
		sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-610630/g' /etc/hosts;
	else
		echo '127.0.1.1 newest-cni-610630' | sudo tee -a /etc/hosts;
	fi
fi
I0127 12:36:04.113288 534894 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0127 12:36:04.113333 534894 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20318-471120/.minikube CaCertPath:/home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20318-471120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20318-471120/.minikube}
I0127 12:36:04.113360 534894 buildroot.go:174] setting up certificates
I0127 12:36:04.113382 534894 provision.go:84] configureAuth start
I0127 12:36:04.113398 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetMachineName
I0127 12:36:04.113676 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetIP
I0127 12:36:04.116365 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:36:04.116714 534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
I0127 12:36:04.116764 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:36:04.117068 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
I0127 12:36:04.119378 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:36:04.119713 534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
I0127 12:36:04.119736 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:36:04.119918 534894 provision.go:143] copyHostCerts
I0127 12:36:04.119990 534894 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-471120/.minikube/ca.pem, removing ...
I0127 12:36:04.120016 534894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-471120/.minikube/ca.pem
I0127 12:36:04.120102 534894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20318-471120/.minikube/ca.pem (1082 bytes)
I0127 12:36:04.120256 534894 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-471120/.minikube/cert.pem, removing ...
I0127 12:36:04.120274 534894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-471120/.minikube/cert.pem
I0127 12:36:04.120316 534894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20318-471120/.minikube/cert.pem (1123 bytes)
I0127 12:36:04.120402 534894 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-471120/.minikube/key.pem, removing ...
I0127 12:36:04.120415 534894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-471120/.minikube/key.pem
I0127 12:36:04.120457 534894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20318-471120/.minikube/key.pem (1679 bytes)
I0127 12:36:04.120535 534894 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20318-471120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca-key.pem org=jenkins.newest-cni-610630 san=[127.0.0.1 192.168.39.228 localhost minikube newest-cni-610630]
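provision.go above issues a server certificate whose SANs cover the listed hostnames and IPs, signed by the profile's CA. A self-contained crypto/x509 sketch of that kind of issuance; a throwaway CA is generated in-process here, whereas minikube reuses ca.pem and ca-key.pem from its store, and error checks are elided for brevity:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA; minikube instead loads its existing CA key pair.
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert carrying the SANs from the log line above.
	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-610630"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		DNSNames:     []string{"localhost", "minikube", "newest-cni-610630"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.228")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}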
I0127 12:36:04.308578 534894 provision.go:177] copyRemoteCerts
I0127 12:36:04.308646 534894 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker
I0127 12:36:04.308681 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
I0127 12:36:04.311740 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:36:04.312147 534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
I0127 12:36:04.312181 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:36:04.312367 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
I0127 12:36:04.312539 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
I0127 12:36:04.312718 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
I0127 12:36:04.312951 534894 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/newest-cni-610630/id_rsa Username:docker}
I0127 12:36:04.406421 534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0127 12:36:04.434493 534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0127 12:36:04.458820 534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I0127 12:36:04.483270 534894 provision.go:87] duration metric: took 369.872198ms to configureAuth
I0127 12:36:04.483307 534894 buildroot.go:189] setting minikube options for container-runtime
I0127 12:36:04.483583 534894 config.go:182] Loaded profile config "newest-cni-610630": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 12:36:04.483608 534894 machine.go:96] duration metric: took 745.518388ms to provisionDockerMachine
I0127 12:36:04.483622 534894 start.go:293] postStartSetup for "newest-cni-610630" (driver="kvm2")
I0127 12:36:04.483638 534894 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0127 12:36:04.483676 534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
I0127 12:36:04.484046 534894 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0127 12:36:04.484091 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
I0127 12:36:04.487237 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:36:04.487689 534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
I0127 12:36:04.487724 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:36:04.487930 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
I0127 12:36:04.488140 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
I0127 12:36:04.488365 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
I0127 12:36:04.488527 534894 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/newest-cni-610630/id_rsa Username:docker}
I0127 12:36:04.578283 534894 ssh_runner.go:195] Run: cat /etc/os-release
I0127 12:36:04.583274 534894 info.go:137] Remote host: Buildroot 2023.02.9
I0127 12:36:04.583302 534894 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-471120/.minikube/addons for local assets ...
I0127 12:36:04.583381 534894 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-471120/.minikube/files for local assets ...
I0127 12:36:04.583480 534894 filesync.go:149] local asset: /home/jenkins/minikube-integration/20318-471120/.minikube/files/etc/ssl/certs/4783872.pem -> 4783872.pem in /etc/ssl/certs
I0127 12:36:04.583597 534894 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0127 12:36:04.594213 534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/files/etc/ssl/certs/4783872.pem --> /etc/ssl/certs/4783872.pem (1708 bytes)
I0127 12:36:04.618506 534894 start.go:296] duration metric: took 134.861455ms for postStartSetup
I0127 12:36:04.618569 534894 fix.go:56] duration metric: took 21.442212309s for fixHost
I0127 12:36:04.618601 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
I0127 12:36:04.621910 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:36:04.622352 534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
I0127 12:36:04.622388 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:36:04.622670 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
I0127 12:36:04.622872 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
I0127 12:36:04.623064 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
I0127 12:36:04.623231 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
I0127 12:36:04.623434 534894 main.go:141] libmachine: Using SSH client type: native
I0127 12:36:04.623683 534894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.39.228 22 <nil> <nil>}
I0127 12:36:04.623701 534894 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0127 12:36:04.745637 534894 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737981364.720376969
I0127 12:36:04.745668 534894 fix.go:216] guest clock: 1737981364.720376969
I0127 12:36:04.745677 534894 fix.go:229] Guest: 2025-01-27 12:36:04.720376969 +0000 UTC Remote: 2025-01-27 12:36:04.618576525 +0000 UTC m=+21.609424923 (delta=101.800444ms)
I0127 12:36:04.745704 534894 fix.go:200] guest clock delta is within tolerance: 101.800444ms
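The fix.go lines above read the guest clock by running date +%s.%N over SSH and compare it with the host-side timestamp; the ~101.8ms delta is under the tolerance, so no clock resync is forced. A sketch of that parse-and-compare step, using the values from this log (the one-second tolerance is an assumption; the log only shows that 101.8ms passes):

// clockdelta.go: parse the "seconds.nanoseconds" output of `date +%s.%N`
// and compare it against a reference clock, as the fix.go lines suggest.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func guestTime(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// %N is zero-padded to 9 digits, so the fraction parses directly as nanoseconds.
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	const tolerance = time.Second // assumed threshold; the log only shows the delta passing
	guest, _ := guestTime("1737981364.720376969") // guest clock value from the log
	remote := time.Date(2025, 1, 27, 12, 36, 4, 618576525, time.UTC)
	delta := guest.Sub(remote) // prints 101.800444ms for these inputs
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta < tolerance && delta > -tolerance)
}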
I0127 12:36:04.745711 534894 start.go:83] releasing machines lock for "newest-cni-610630", held for 21.569374077s
I0127 12:36:04.745742 534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
I0127 12:36:04.746064 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetIP
I0127 12:36:04.749116 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:36:04.749586 534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
I0127 12:36:04.749623 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:36:04.749762 534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
I0127 12:36:04.750369 534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
I0127 12:36:04.750591 534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
I0127 12:36:04.750714 534894 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0127 12:36:04.750788 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
I0127 12:36:04.750841 534894 ssh_runner.go:195] Run: cat /version.json
I0127 12:36:04.750872 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
I0127 12:36:04.753604 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:36:04.753937 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:36:04.753995 534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
I0127 12:36:04.754036 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:36:04.754117 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
I0127 12:36:04.754283 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
I0127 12:36:04.754435 534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
I0127 12:36:04.754463 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:36:04.754505 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
I0127 12:36:04.754649 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
I0127 12:36:04.754824 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
I0127 12:36:04.754704 534894 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/newest-cni-610630/id_rsa Username:docker}
I0127 12:36:04.754972 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
I0127 12:36:04.755165 534894 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/newest-cni-610630/id_rsa Username:docker}
I0127 12:36:04.837766 534894 ssh_runner.go:195] Run: systemctl --version
I0127 12:36:04.870922 534894 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0127 12:36:04.877067 534894 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0127 12:36:04.877148 534894 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0127 12:36:04.898288 534894 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
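The find/-exec mv one-liner above disables every bridge or podman CNI config under /etc/cni/net.d by renaming it with a .mk_disabled suffix, so the container runtime ignores it; in this run it caught 87-podman-bridge.conflist. A Go equivalent of that rename pass (a sketch that needs root; the real mechanism here is the shell command in the log):

// disablecni.go: rename bridge/podman CNI configs to *.mk_disabled,
// mirroring the `find ... -exec mv` one-liner above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	const dir = "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue // already disabled, or not a flat config file
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				fmt.Fprintln(os.Stderr, err)
				continue
			}
			fmt.Println("disabled", src)
		}
	}
}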
I0127 12:36:04.898318 534894 start.go:495] detecting cgroup driver to use...
I0127 12:36:04.898407 534894 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0127 12:36:04.932879 534894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0127 12:36:04.949987 534894 docker.go:217] disabling cri-docker service (if available) ...
I0127 12:36:04.950133 534894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0127 12:36:04.967044 534894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0127 12:36:04.983091 534894 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0127 12:36:05.124492 534894 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0127 12:36:05.268901 534894 docker.go:233] disabling docker service ...
I0127 12:36:05.268987 534894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0127 12:36:05.284320 534894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0127 12:36:05.298992 534894 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0127 12:36:05.441228 534894 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0127 12:36:05.609452 534894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0127 12:36:05.626916 534894 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0127 12:36:05.647205 534894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0127 12:36:05.657704 534894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0127 12:36:05.667476 534894 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0127 12:36:05.667555 534894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0127 12:36:05.677468 534894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0127 12:36:05.688601 534894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0127 12:36:05.698702 534894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0127 12:36:05.710663 534894 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0127 12:36:05.724221 534894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0127 12:36:05.737093 534894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0127 12:36:05.746742 534894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
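The run of sed commands above rewrites /etc/containerd/config.toml in place: pin sandbox_image to pause:3.10, set restrict_oom_score_adj = false, force SystemdCgroup = false to match the cgroupfs driver chosen at containerd.go:146, migrate io.containerd.runtime.v1.linux and runc.v1 references to io.containerd.runc.v2, point conf_dir at /etc/cni/net.d, and re-insert enable_unprivileged_ports = true under the CRI plugin section. As an illustration, here is the SystemdCgroup edit expressed in Go rather than sed (a hypothetical standalone program, not minikube's code, which runs sed over ssh_runner):

// toml_edit.go: the SystemdCgroup rewrite from the log, done with Go's
// regexp package instead of sed.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Same pattern as: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}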
I0127 12:36:05.756481 534894 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0127 12:36:05.767282 534894 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0127 12:36:05.767344 534894 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0127 12:36:05.780026 534894 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0127 12:36:05.791098 534894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 12:36:05.930676 534894 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0127 12:36:05.966221 534894 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0127 12:36:05.966321 534894 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0127 12:36:05.971094 534894 retry.go:31] will retry after 1.421722911s: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
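The first stat fails because containerd has just been restarted and has not yet recreated its socket; retry.go backs off (~1.42s here) and the whole wait is bounded by the 60s budget announced at start.go:542. A polling sketch of that wait loop (the initial delay and the doubling are assumptions; minikube's retry helper adds jitter):

// waitsock.go: poll for /run/containerd/containerd.sock with a deadline,
// approximating the "Will wait 60s for socket path" loop in the log.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 500 * time.Millisecond // assumed initial backoff
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // socket exists; containerd is back
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(delay)
		delay *= 2
	}
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("containerd socket is up")
}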
I0127 12:36:07.393037 534894 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0127 12:36:07.398456 534894 start.go:563] Will wait 60s for crictl version
I0127 12:36:07.398530 534894 ssh_runner.go:195] Run: which crictl
I0127 12:36:07.402351 534894 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0127 12:36:07.446224 534894 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.7.23
RuntimeApiVersion: v1
I0127 12:36:07.446301 534894 ssh_runner.go:195] Run: containerd --version
I0127 12:36:07.473080 534894 ssh_runner.go:195] Run: containerd --version
I0127 12:36:07.497663 534894 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
I0127 12:36:07.498857 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetIP
I0127 12:36:07.501622 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:36:07.502032 534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
I0127 12:36:07.502071 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:36:07.502274 534894 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0127 12:36:07.506028 534894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
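The grep/echo/cp compound command above pins host.minikube.internal to the gateway IP 192.168.39.1: it drops any existing tab-separated entry for that name from /etc/hosts, appends the fresh mapping, writes the result to a temp file, and copies it back into place. The same pattern as a Go sketch (os.Rename stands in for the log's sudo cp; needs root):

// hostspin.go: re-pin a hosts entry the way the log's one-liner does:
// drop any stale line for the name, append the new one, replace the file.
package main

import (
	"fmt"
	"os"
	"strings"
)

func pinHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale entry, like grep -v $'\t<name>$'
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path) // near-equivalent of the log's temp-file-then-cp dance
}

func main() {
	if err := pinHost("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}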
I0127 12:36:07.519964 534894 out.go:177] - kubeadm.pod-network-cidr=10.42.0.0/16
I0127 12:36:03.206663 532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
I0127 12:36:05.207472 532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
I0127 12:36:07.706605 532844 pod_ready.go:103] pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace has status "Ready":"False"
I0127 12:36:07.521255 534894 kubeadm.go:883] updating cluster {Name:newest-cni-610630 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-610630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0127 12:36:07.521413 534894 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
I0127 12:36:07.521493 534894 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 12:36:07.554098 534894 containerd.go:627] all images are preloaded for containerd runtime.
I0127 12:36:07.554125 534894 containerd.go:534] Images already preloaded, skipping extraction
I0127 12:36:07.554187 534894 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 12:36:07.591861 534894 containerd.go:627] all images are preloaded for containerd runtime.
I0127 12:36:07.591890 534894 cache_images.go:84] Images are preloaded, skipping loading
I0127 12:36:07.591901 534894 kubeadm.go:934] updating node { 192.168.39.228 8443 v1.32.1 containerd true true} ...
I0127 12:36:07.592033 534894 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-610630 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.228
[Install]
config:
{KubernetesVersion:v1.32.1 ClusterName:newest-cni-610630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0127 12:36:07.592107 534894 ssh_runner.go:195] Run: sudo crictl info
I0127 12:36:07.633013 534894 cni.go:84] Creating CNI manager for ""
I0127 12:36:07.633040 534894 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0127 12:36:07.633051 534894 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
I0127 12:36:07.633082 534894 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.228 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-610630 NodeName:newest-cni-610630 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.228"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.228 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0127 12:36:07.633263 534894 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.39.228
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "newest-cni-610630"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.39.228"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.39.228"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      - name: "proxy-refresh-interval"
        value: "70000"
kubernetesVersion: v1.32.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.42.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.42.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
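The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A quick structural check of a multi-document file like this is to decode each document and print its kind before handing it to kubeadm; the sketch assumes the gopkg.in/yaml.v3 module is available and is not part of minikube:

// splitdocs.go: decode a multi-document kubeadm config and print each kind,
// a cheap sanity check before `kubeadm init --config`.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3" // assumed dependency: go get gopkg.in/yaml.v3
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break // all documents consumed
		} else if err != nil {
			fmt.Fprintln(os.Stderr, "bad document:", err)
			return
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}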
I0127 12:36:07.633336 534894 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
I0127 12:36:07.643906 534894 binaries.go:44] Found k8s binaries, skipping transfer
I0127 12:36:07.643972 534894 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0127 12:36:07.653399 534894 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
I0127 12:36:07.671016 534894 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0127 12:36:07.691229 534894 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2311 bytes)
I0127 12:36:07.711891 534894 ssh_runner.go:195] Run: grep 192.168.39.228 control-plane.minikube.internal$ /etc/hosts
I0127 12:36:07.716614 534894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.228 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0127 12:36:07.730520 534894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 12:36:07.852685 534894 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0127 12:36:07.870469 534894 certs.go:68] Setting up /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/newest-cni-610630 for IP: 192.168.39.228
I0127 12:36:07.870498 534894 certs.go:194] generating shared ca certs ...
I0127 12:36:07.870523 534894 certs.go:226] acquiring lock for ca certs: {Name:mk02d117412837bd489768267e2b174e6c3ff6b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 12:36:07.870697 534894 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20318-471120/.minikube/ca.key
I0127 12:36:07.870773 534894 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20318-471120/.minikube/proxy-client-ca.key
I0127 12:36:07.870785 534894 certs.go:256] generating profile certs ...
I0127 12:36:07.870943 534894 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/newest-cni-610630/client.key
I0127 12:36:07.871073 534894 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/newest-cni-610630/apiserver.key.2ce4e80e
I0127 12:36:07.871140 534894 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/newest-cni-610630/proxy-client.key
I0127 12:36:07.871291 534894 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/478387.pem (1338 bytes)
W0127 12:36:07.871334 534894 certs.go:480] ignoring /home/jenkins/minikube-integration/20318-471120/.minikube/certs/478387_empty.pem, impossibly tiny 0 bytes
I0127 12:36:07.871349 534894 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca-key.pem (1679 bytes)
I0127 12:36:07.871394 534894 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/ca.pem (1082 bytes)
I0127 12:36:07.871429 534894 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/cert.pem (1123 bytes)
I0127 12:36:07.871461 534894 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/certs/key.pem (1679 bytes)
I0127 12:36:07.871519 534894 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-471120/.minikube/files/etc/ssl/certs/4783872.pem (1708 bytes)
I0127 12:36:07.872415 534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0127 12:36:07.904294 534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0127 12:36:07.944289 534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0127 12:36:07.979498 534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0127 12:36:08.010836 534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/newest-cni-610630/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I0127 12:36:08.041389 534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/newest-cni-610630/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0127 12:36:08.201622 532844 pod_ready.go:82] duration metric: took 4m0.001032286s for pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace to be "Ready" ...
E0127 12:36:08.201658 532844 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-9kjfb" in "kube-system" namespace to be "Ready" (will not retry!)
I0127 12:36:08.201683 532844 pod_ready.go:39] duration metric: took 4m14.040174083s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0127 12:36:08.201724 532844 kubeadm.go:597] duration metric: took 4m21.555444284s to restartPrimaryControlPlane
W0127 12:36:08.201798 532844 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
I0127 12:36:08.201833 532844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I0127 12:36:10.133466 532844 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.93160232s)
I0127 12:36:10.133550 532844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0127 12:36:10.155296 532844 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0127 12:36:10.170023 532844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0127 12:36:10.183165 532844 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0127 12:36:10.183194 532844 kubeadm.go:157] found existing configuration files:
I0127 12:36:10.183257 532844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
I0127 12:36:10.195175 532844 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0127 12:36:10.195253 532844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0127 12:36:10.208349 532844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
I0127 12:36:10.220351 532844 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0127 12:36:10.220429 532844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0127 12:36:10.238914 532844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
I0127 12:36:10.254995 532844 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0127 12:36:10.255067 532844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0127 12:36:10.266753 532844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
I0127 12:36:10.278422 532844 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0127 12:36:10.278490 532844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0127 12:36:10.292279 532844 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0127 12:36:10.351007 532844 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
I0127 12:36:10.351189 532844 kubeadm.go:310] [preflight] Running pre-flight checks
I0127 12:36:10.469769 532844 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0127 12:36:10.469949 532844 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0127 12:36:10.470056 532844 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0127 12:36:10.479353 532844 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0127 12:36:10.481858 532844 out.go:235] - Generating certificates and keys ...
I0127 12:36:10.481959 532844 kubeadm.go:310] [certs] Using existing ca certificate authority
I0127 12:36:10.482038 532844 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0127 12:36:10.482135 532844 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0127 12:36:10.482236 532844 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
I0127 12:36:10.482358 532844 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
I0127 12:36:10.482442 532844 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
I0127 12:36:10.482525 532844 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
I0127 12:36:10.482633 532844 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
I0127 12:36:10.483039 532844 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0127 12:36:10.483619 532844 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0127 12:36:10.483746 532844 kubeadm.go:310] [certs] Using the existing "sa" key
I0127 12:36:10.483829 532844 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0127 12:36:10.585561 532844 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0127 12:36:10.784195 532844 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0127 12:36:10.958020 532844 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0127 12:36:11.223196 532844 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0127 12:36:11.439416 532844 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0127 12:36:11.440271 532844 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0127 12:36:11.444236 532844 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0127 12:36:08.374973 532607 pod_ready.go:103] pod "etcd-embed-certs-346100" in "kube-system" namespace has status "Ready":"False"
I0127 12:36:10.872073 532607 pod_ready.go:103] pod "etcd-embed-certs-346100" in "kube-system" namespace has status "Ready":"False"
I0127 12:36:11.445766 532844 out.go:235] - Booting up control plane ...
I0127 12:36:11.445895 532844 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0127 12:36:11.445993 532844 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0127 12:36:11.447764 532844 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0127 12:36:11.484418 532844 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0127 12:36:11.496508 532844 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0127 12:36:11.496594 532844 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0127 12:36:11.681886 532844 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0127 12:36:11.682039 532844 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0127 12:36:12.183183 532844 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.076889ms
I0127 12:36:12.183305 532844 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0127 12:36:08.074441 534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/newest-cni-610630/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0127 12:36:08.107699 534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/profiles/newest-cni-610630/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0127 12:36:08.137950 534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/certs/478387.pem --> /usr/share/ca-certificates/478387.pem (1338 bytes)
I0127 12:36:08.163896 534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/files/etc/ssl/certs/4783872.pem --> /usr/share/ca-certificates/4783872.pem (1708 bytes)
I0127 12:36:08.188493 534894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-471120/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0127 12:36:08.217196 534894 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0127 12:36:08.237633 534894 ssh_runner.go:195] Run: openssl version
I0127 12:36:08.244270 534894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/478387.pem && ln -fs /usr/share/ca-certificates/478387.pem /etc/ssl/certs/478387.pem"
I0127 12:36:08.258544 534894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/478387.pem
I0127 12:36:08.264117 534894 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 11:28 /usr/share/ca-certificates/478387.pem
I0127 12:36:08.264194 534894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/478387.pem
I0127 12:36:08.271823 534894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/478387.pem /etc/ssl/certs/51391683.0"
I0127 12:36:08.283160 534894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4783872.pem && ln -fs /usr/share/ca-certificates/4783872.pem /etc/ssl/certs/4783872.pem"
I0127 12:36:08.293600 534894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4783872.pem
I0127 12:36:08.299046 534894 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 11:28 /usr/share/ca-certificates/4783872.pem
I0127 12:36:08.299115 534894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4783872.pem
I0127 12:36:08.306015 534894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4783872.pem /etc/ssl/certs/3ec20f2e.0"
I0127 12:36:08.317692 534894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0127 12:36:08.328317 534894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0127 12:36:08.332856 534894 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 11:21 /usr/share/ca-certificates/minikubeCA.pem
I0127 12:36:08.332912 534894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0127 12:36:08.342875 534894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
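Each certificate install above is a two-step idiom: openssl x509 -hash -noout computes the OpenSSL subject hash, and ln -fs creates the <hash>.0 symlink in /etc/ssl/certs (51391683.0, 3ec20f2e.0, and b5213941.0 in this run) so hash-based trust lookup finds the CA. A sketch that drives the same pair from Go; it shells out for the hash because the subject-hash algorithm is OpenSSL-specific, and it needs root to write /etc/ssl/certs:

// rehash.go: compute a cert's OpenSSL subject hash and install the
// <hash>.0 symlink, mirroring the openssl/ln pair in the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	os.Remove(link) // emulate ln -f: replace any existing link
	if err := os.Symlink(cert, link); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("installed", link)
}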
I0127 12:36:08.355240 534894 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0127 12:36:08.363234 534894 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0127 12:36:08.369655 534894 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0127 12:36:08.377149 534894 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0127 12:36:08.382739 534894 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0127 12:36:08.388277 534894 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0127 12:36:08.395644 534894 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
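Each openssl x509 -checkend 86400 call above exits non-zero if the certificate expires within the next 24 hours; a failure on any of them would presumably force certificate regeneration rather than the reuse seen later in this run. The equivalent check in pure Go (a sketch; the path is taken from the log):

// checkend.go: report whether a PEM certificate expires within 24h,
// equivalent to `openssl x509 -noout -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when NotAfter falls before now+d, i.e. the cert expires within d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	if soon {
		fmt.Println("certificate will expire within 24h") // openssl would exit 1 here
		os.Exit(1)
	}
	fmt.Println("certificate is good for at least 24h")
}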
I0127 12:36:08.403226 534894 kubeadm.go:392] StartCluster: {Name:newest-cni-610630 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-610630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0127 12:36:08.403325 534894 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0127 12:36:08.403369 534894 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0127 12:36:08.454071 534894 cri.go:89] found id: "05ffab9ec1df6a08be3cf4fff6f1bc2d34b936f85bb1eca01f0faa9c79b81ca2"
I0127 12:36:08.454100 534894 cri.go:89] found id: "a7a8b6e36dd9fd2a974679303088e27d310da467ae0b44b7e2bee01a313fb250"
I0127 12:36:08.454104 534894 cri.go:89] found id: "357a817781a0a2c2660b62d3147761ada65f892574b6896b131cba4fa7203271"
I0127 12:36:08.454108 534894 cri.go:89] found id: "8e4b9189f64d7d2af191278958debeb684b0c1e523e3427108539c6a95d2ba1b"
I0127 12:36:08.454118 534894 cri.go:89] found id: "631ec6fc6fa3674ba19cdf2652b115231ee41d673e732154c0bd56a516163f8a"
I0127 12:36:08.454123 534894 cri.go:89] found id: "3e5a0e300fdce1ad4301a269b5b03cbca2c80937aa3ed15c1763b05a166b3e13"
I0127 12:36:08.454127 534894 cri.go:89] found id: "ddcaaca610d5d252accfd4c9b01497daef2166865045b0b7a4e9dff690376d76"
I0127 12:36:08.454130 534894 cri.go:89] found id: "0c95d79f7c80adc359e2dc6a5bf31fdcedd8c4ee393022eafd7199769e04e77c"
I0127 12:36:08.454134 534894 cri.go:89] found id: ""
I0127 12:36:08.454198 534894 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
W0127 12:36:08.472428 534894 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-01-27T12:36:08Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I0127 12:36:08.472525 534894 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0127 12:36:08.484156 534894 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0127 12:36:08.484183 534894 kubeadm.go:593] restartPrimaryControlPlane start ...
I0127 12:36:08.484255 534894 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0127 12:36:08.494975 534894 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0127 12:36:08.496360 534894 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-610630" does not appear in /home/jenkins/minikube-integration/20318-471120/kubeconfig
I0127 12:36:08.497417 534894 kubeconfig.go:62] /home/jenkins/minikube-integration/20318-471120/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-610630" cluster setting kubeconfig missing "newest-cni-610630" context setting]
I0127 12:36:08.498843 534894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-471120/kubeconfig: {Name:mk452cc8a4801513f9fb799655fd8ea78318fe87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 12:36:08.501415 534894 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0127 12:36:08.513111 534894 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.228
I0127 12:36:08.513147 534894 kubeadm.go:1160] stopping kube-system containers ...
I0127 12:36:08.513163 534894 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
I0127 12:36:08.513216 534894 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0127 12:36:08.561176 534894 cri.go:89] found id: "05ffab9ec1df6a08be3cf4fff6f1bc2d34b936f85bb1eca01f0faa9c79b81ca2"
I0127 12:36:08.561203 534894 cri.go:89] found id: "a7a8b6e36dd9fd2a974679303088e27d310da467ae0b44b7e2bee01a313fb250"
I0127 12:36:08.561209 534894 cri.go:89] found id: "357a817781a0a2c2660b62d3147761ada65f892574b6896b131cba4fa7203271"
I0127 12:36:08.561214 534894 cri.go:89] found id: "8e4b9189f64d7d2af191278958debeb684b0c1e523e3427108539c6a95d2ba1b"
I0127 12:36:08.561218 534894 cri.go:89] found id: "631ec6fc6fa3674ba19cdf2652b115231ee41d673e732154c0bd56a516163f8a"
I0127 12:36:08.561223 534894 cri.go:89] found id: "3e5a0e300fdce1ad4301a269b5b03cbca2c80937aa3ed15c1763b05a166b3e13"
I0127 12:36:08.561227 534894 cri.go:89] found id: "ddcaaca610d5d252accfd4c9b01497daef2166865045b0b7a4e9dff690376d76"
I0127 12:36:08.561231 534894 cri.go:89] found id: "0c95d79f7c80adc359e2dc6a5bf31fdcedd8c4ee393022eafd7199769e04e77c"
I0127 12:36:08.561235 534894 cri.go:89] found id: ""
I0127 12:36:08.561242 534894 cri.go:252] Stopping containers: [05ffab9ec1df6a08be3cf4fff6f1bc2d34b936f85bb1eca01f0faa9c79b81ca2 a7a8b6e36dd9fd2a974679303088e27d310da467ae0b44b7e2bee01a313fb250 357a817781a0a2c2660b62d3147761ada65f892574b6896b131cba4fa7203271 8e4b9189f64d7d2af191278958debeb684b0c1e523e3427108539c6a95d2ba1b 631ec6fc6fa3674ba19cdf2652b115231ee41d673e732154c0bd56a516163f8a 3e5a0e300fdce1ad4301a269b5b03cbca2c80937aa3ed15c1763b05a166b3e13 ddcaaca610d5d252accfd4c9b01497daef2166865045b0b7a4e9dff690376d76 0c95d79f7c80adc359e2dc6a5bf31fdcedd8c4ee393022eafd7199769e04e77c]
I0127 12:36:08.561301 534894 ssh_runner.go:195] Run: which crictl
I0127 12:36:08.565588 534894 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 05ffab9ec1df6a08be3cf4fff6f1bc2d34b936f85bb1eca01f0faa9c79b81ca2 a7a8b6e36dd9fd2a974679303088e27d310da467ae0b44b7e2bee01a313fb250 357a817781a0a2c2660b62d3147761ada65f892574b6896b131cba4fa7203271 8e4b9189f64d7d2af191278958debeb684b0c1e523e3427108539c6a95d2ba1b 631ec6fc6fa3674ba19cdf2652b115231ee41d673e732154c0bd56a516163f8a 3e5a0e300fdce1ad4301a269b5b03cbca2c80937aa3ed15c1763b05a166b3e13 ddcaaca610d5d252accfd4c9b01497daef2166865045b0b7a4e9dff690376d76 0c95d79f7c80adc359e2dc6a5bf31fdcedd8c4ee393022eafd7199769e04e77c
I0127 12:36:08.619372 534894 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0127 12:36:08.636553 534894 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0127 12:36:08.648359 534894 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0127 12:36:08.648385 534894 kubeadm.go:157] found existing configuration files:
I0127 12:36:08.648439 534894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0127 12:36:08.659186 534894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0127 12:36:08.659257 534894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0127 12:36:08.668828 534894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0127 12:36:08.679551 534894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0127 12:36:08.679624 534894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0127 12:36:08.689530 534894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0127 12:36:08.701111 534894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0127 12:36:08.701164 534894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0127 12:36:08.709830 534894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0127 12:36:08.718407 534894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0127 12:36:08.718495 534894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0127 12:36:08.727400 534894 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0127 12:36:08.736296 534894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0127 12:36:08.887779 534894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0127 12:36:09.818917 534894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0127 12:36:10.080535 534894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0127 12:36:10.159744 534894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
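The five bash invocations above are the restart path's kubeadm phases, in order: certs, kubeconfig, kubelet-start, control-plane, etcd. A sketch of driving the same sequence (the versioned PATH prefix and config path are copied from the log; the minimal error handling is an assumption):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        phases := []string{
            "certs all",
            "kubeconfig all",
            "kubelet-start",
            "control-plane all",
            "etcd local",
        }
        for _, p := range phases {
            // the PATH prefix pins kubeadm to the cluster's binary version, as in the log
            cmd := `sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" ` +
                `kubeadm init phase ` + p + ` --config /var/tmp/minikube/kubeadm.yaml`
            if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
                fmt.Printf("phase %q failed: %v\n%s", p, err, out)
                return
            }
        }
    }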
I0127 12:36:10.232154 534894 api_server.go:52] waiting for apiserver process to appear ...
I0127 12:36:10.232252 534894 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 12:36:10.732454 534894 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 12:36:11.233357 534894 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 12:36:11.264081 534894 api_server.go:72] duration metric: took 1.031921463s to wait for apiserver process to appear ...
I0127 12:36:11.264115 534894 api_server.go:88] waiting for apiserver healthz status ...
I0127 12:36:11.264142 534894 api_server.go:253] Checking apiserver healthz at https://192.168.39.228:8443/healthz ...
I0127 12:36:11.264724 534894 api_server.go:269] stopped: https://192.168.39.228:8443/healthz: Get "https://192.168.39.228:8443/healthz": dial tcp 192.168.39.228:8443: connect: connection refused
I0127 12:36:11.764442 534894 api_server.go:253] Checking apiserver healthz at https://192.168.39.228:8443/healthz ...
I0127 12:36:14.358365 534894 api_server.go:279] https://192.168.39.228:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0127 12:36:14.358472 534894 api_server.go:103] status: https://192.168.39.228:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0127 12:36:14.358502 534894 api_server.go:253] Checking apiserver healthz at https://192.168.39.228:8443/healthz ...
I0127 12:36:14.408913 534894 api_server.go:279] https://192.168.39.228:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0127 12:36:14.409034 534894 api_server.go:103] status: https://192.168.39.228:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0127 12:36:14.764463 534894 api_server.go:253] Checking apiserver healthz at https://192.168.39.228:8443/healthz ...
I0127 12:36:14.771512 534894 api_server.go:279] https://192.168.39.228:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0127 12:36:14.771584 534894 api_server.go:103] status: https://192.168.39.228:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0127 12:36:15.264813 534894 api_server.go:253] Checking apiserver healthz at https://192.168.39.228:8443/healthz ...
I0127 12:36:15.270318 534894 api_server.go:279] https://192.168.39.228:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0127 12:36:15.270344 534894 api_server.go:103] status: https://192.168.39.228:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0127 12:36:15.765063 534894 api_server.go:253] Checking apiserver healthz at https://192.168.39.228:8443/healthz ...
I0127 12:36:15.772704 534894 api_server.go:279] https://192.168.39.228:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0127 12:36:15.772774 534894 api_server.go:103] status: https://192.168.39.228:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0127 12:36:16.264285 534894 api_server.go:253] Checking apiserver healthz at https://192.168.39.228:8443/healthz ...
I0127 12:36:16.271130 534894 api_server.go:279] https://192.168.39.228:8443/healthz returned 200:
ok
I0127 12:36:16.281041 534894 api_server.go:141] control plane version: v1.32.1
I0127 12:36:16.281071 534894 api_server.go:131] duration metric: took 5.016947638s to wait for apiserver health ...
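The polling above walks the typical restart progression: connection refused while the apiserver binds, 403 for the anonymous probe before RBAC bootstrap grants system:anonymous access, 500 while the rbac/bootstrap-roles and scheduling poststart hooks finish, then a plain 200 "ok". A minimal poller with the same tolerance (endpoint taken from the log; skipping TLS verification is an assumption made for brevity, the real check trusts the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // assumption: skip cert verification to keep the sketch short
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.39.228:8443/healthz")
            if err != nil {
                time.Sleep(500 * time.Millisecond) // connection refused: not listening yet
                continue
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Printf("healthz: %s\n", body) // "ok"
                return
            }
            // 403 (anonymous, pre-RBAC) and 500 (poststart hooks) both mean "retry"
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("apiserver never became healthy")
    }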
I0127 12:36:16.281087 534894 cni.go:84] Creating CNI manager for ""
I0127 12:36:16.281096 534894 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0127 12:36:16.282806 534894 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0127 12:36:16.284232 534894 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0127 12:36:16.297533 534894 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
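The 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is not shown in the log. For orientation only, a bridge conflist of the kind minikube generates has roughly this shape (every field value here, including the subnet, is an illustrative assumption, not the actual file):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }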
I0127 12:36:16.314501 534894 system_pods.go:43] waiting for kube-system pods to appear ...
I0127 12:36:16.324319 534894 system_pods.go:59] 9 kube-system pods found
I0127 12:36:16.324349 534894 system_pods.go:61] "coredns-668d6bf9bc-n6hwn" [24d3582e-97d0-4bb8-b12a-6f69ecd72309] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0127 12:36:16.324357 534894 system_pods.go:61] "coredns-668d6bf9bc-vg4bb" [3d20d4a5-8ddf-4166-af63-47beab76d25f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0127 12:36:16.324365 534894 system_pods.go:61] "etcd-newest-cni-610630" [0a812f8b-1e38-49a2-be17-9db3fb1979db] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0127 12:36:16.324379 534894 system_pods.go:61] "kube-apiserver-newest-cni-610630" [292fb9b9-ccfb-49b5-892d-078e5897981d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0127 12:36:16.324385 534894 system_pods.go:61] "kube-controller-manager-newest-cni-610630" [36d1c2e6-42ce-4d4a-8bba-ff7beb6551ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0127 12:36:16.324391 534894 system_pods.go:61] "kube-proxy-8szpt" [11487c0b-6afa-4c83-9eb8-6a9f609f7b58] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0127 12:36:16.324395 534894 system_pods.go:61] "kube-scheduler-newest-cni-610630" [2f8744a9-54dc-4b5c-92f6-b0d3b9b0de7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0127 12:36:16.324400 534894 system_pods.go:61] "metrics-server-f79f97bbb-kcc5g" [6593df15-330d-4389-b878-45d396d718b0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0127 12:36:16.324408 534894 system_pods.go:61] "storage-provisioner" [3cc7604e-fcbb-48e5-8445-82d5150b759f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0127 12:36:16.324413 534894 system_pods.go:74] duration metric: took 9.892595ms to wait for pod list to return data ...
I0127 12:36:16.324424 534894 node_conditions.go:102] verifying NodePressure condition ...
I0127 12:36:16.327339 534894 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0127 12:36:16.327364 534894 node_conditions.go:123] node cpu capacity is 2
I0127 12:36:16.327385 534894 node_conditions.go:105] duration metric: took 2.956884ms to run NodePressure ...
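The system_pods.go and node_conditions.go blocks above are ordinary API reads: list the kube-system pods, then read capacity off the Node object. A client-go sketch of the pod-list half (the kubeconfig path is an assumption; any config that reaches the cluster works):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // assumption: the in-VM kubeconfig path; substitute your own
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            fmt.Printf("%q %s\n", p.Name, p.Status.Phase)
        }
    }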
I0127 12:36:16.327404 534894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0127 12:36:16.991253 534894 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0127 12:36:17.011999 534894 ops.go:34] apiserver oom_adj: -16
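The oom_adj probe above confirms the apiserver keeps its protective OOM score (-16 here, meaning the kernel should prefer to kill almost anything else first). A direct sketch of the same read; the log's bash one-liner uses plain pgrep, the -n flag here (newest match) is an assumption to guarantee a single PID:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
        if err != nil {
            fmt.Println("kube-apiserver not running")
            return
        }
        pid := strings.TrimSpace(string(out))
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Printf("apiserver oom_adj: %s", adj)
    }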
I0127 12:36:17.012027 534894 kubeadm.go:597] duration metric: took 8.527837095s to restartPrimaryControlPlane
I0127 12:36:17.012040 534894 kubeadm.go:394] duration metric: took 8.608822701s to StartCluster
I0127 12:36:17.012072 534894 settings.go:142] acquiring lock: {Name:mkc626b99c5f2ef89a002643cb7e51a3cbdf8ffc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 12:36:17.012204 534894 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20318-471120/kubeconfig
I0127 12:36:17.014682 534894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-471120/kubeconfig: {Name:mk452cc8a4801513f9fb799655fd8ea78318fe87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 12:36:17.015030 534894 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0127 12:36:17.015158 534894 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0127 12:36:17.015477 534894 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-610630"
I0127 12:36:17.015505 534894 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-610630"
I0127 12:36:17.015320 534894 config.go:182] Loaded profile config "newest-cni-610630": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 12:36:17.015542 534894 addons.go:69] Setting metrics-server=true in profile "newest-cni-610630"
I0127 12:36:17.015555 534894 addons.go:238] Setting addon metrics-server=true in "newest-cni-610630"
W0127 12:36:17.015562 534894 addons.go:247] addon metrics-server should already be in state true
I0127 12:36:17.015556 534894 addons.go:69] Setting default-storageclass=true in profile "newest-cni-610630"
I0127 12:36:17.015582 534894 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-610630"
I0127 12:36:17.015588 534894 host.go:66] Checking if "newest-cni-610630" exists ...
I0127 12:36:17.015521 534894 addons.go:69] Setting dashboard=true in profile "newest-cni-610630"
I0127 12:36:17.015608 534894 addons.go:238] Setting addon dashboard=true in "newest-cni-610630"
W0127 12:36:17.015617 534894 addons.go:247] addon dashboard should already be in state true
I0127 12:36:17.015643 534894 host.go:66] Checking if "newest-cni-610630" exists ...
I0127 12:36:17.016040 534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:36:17.016039 534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:36:17.016050 534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
W0127 12:36:17.015533 534894 addons.go:247] addon storage-provisioner should already be in state true
I0127 12:36:17.016079 534894 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:36:17.016082 534894 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:36:17.016083 534894 host.go:66] Checking if "newest-cni-610630" exists ...
I0127 12:36:17.016420 534894 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:36:17.016423 534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:36:17.016450 534894 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:36:17.031224 534894 out.go:177] * Verifying Kubernetes components...
I0127 12:36:17.032914 534894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 12:36:17.036836 534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35431
I0127 12:36:17.037340 534894 main.go:141] libmachine: () Calling .GetVersion
I0127 12:36:17.037862 534894 main.go:141] libmachine: Using API Version 1
I0127 12:36:17.037882 534894 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:36:17.038318 534894 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:36:17.038866 534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:36:17.038905 534894 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:36:17.039846 534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42849
I0127 12:36:17.040182 534894 main.go:141] libmachine: () Calling .GetVersion
I0127 12:36:17.040873 534894 main.go:141] libmachine: Using API Version 1
I0127 12:36:17.040890 534894 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:36:17.041292 534894 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:36:17.041587 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetState
I0127 12:36:17.045301 534894 addons.go:238] Setting addon default-storageclass=true in "newest-cni-610630"
W0127 12:36:17.045320 534894 addons.go:247] addon default-storageclass should already be in state true
I0127 12:36:17.045352 534894 host.go:66] Checking if "newest-cni-610630" exists ...
I0127 12:36:17.045759 534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:36:17.045799 534894 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:36:17.048089 534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36495
I0127 12:36:17.048729 534894 main.go:141] libmachine: () Calling .GetVersion
I0127 12:36:17.049195 534894 main.go:141] libmachine: Using API Version 1
I0127 12:36:17.049213 534894 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:36:17.049644 534894 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:36:17.050180 534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:36:17.050222 534894 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:36:17.050700 534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46839
I0127 12:36:17.051087 534894 main.go:141] libmachine: () Calling .GetVersion
I0127 12:36:17.051560 534894 main.go:141] libmachine: Using API Version 1
I0127 12:36:17.051581 534894 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:36:17.051971 534894 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:36:17.052563 534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:36:17.052600 534894 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:36:17.065040 534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33873
I0127 12:36:17.065537 534894 main.go:141] libmachine: () Calling .GetVersion
I0127 12:36:17.066047 534894 main.go:141] libmachine: Using API Version 1
I0127 12:36:17.066072 534894 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:36:17.066400 534894 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:36:17.066556 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetState
I0127 12:36:17.068438 534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
I0127 12:36:17.070276 534894 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0127 12:36:17.071684 534894 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0127 12:36:17.072821 534894 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0127 12:36:17.072844 534894 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0127 12:36:17.072867 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
I0127 12:36:17.073985 534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43373
I0127 12:36:17.074526 534894 main.go:141] libmachine: () Calling .GetVersion
I0127 12:36:17.075082 534894 main.go:141] libmachine: Using API Version 1
I0127 12:36:17.075099 534894 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:36:17.075677 534894 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:36:17.076310 534894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:36:17.076356 534894 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:36:17.078889 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:36:17.079441 534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
I0127 12:36:17.079463 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:36:17.079747 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
I0127 12:36:17.079954 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
I0127 12:36:17.080136 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
I0127 12:36:17.080333 534894 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/newest-cni-610630/id_rsa Username:docker}
I0127 12:36:17.091530 534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40973
I0127 12:36:17.092126 534894 main.go:141] libmachine: () Calling .GetVersion
I0127 12:36:17.092669 534894 main.go:141] libmachine: Using API Version 1
I0127 12:36:17.092694 534894 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:36:17.093285 534894 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:36:17.093437 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetState
I0127 12:36:17.095189 534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34541
I0127 12:36:17.095304 534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
I0127 12:36:17.095761 534894 main.go:141] libmachine: () Calling .GetVersion
I0127 12:36:17.096341 534894 main.go:141] libmachine: Using API Version 1
I0127 12:36:17.096358 534894 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:36:17.096828 534894 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:36:17.097030 534894 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0127 12:36:17.097195 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetState
I0127 12:36:17.097833 534894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40641
I0127 12:36:17.098239 534894 main.go:141] libmachine: () Calling .GetVersion
I0127 12:36:17.098254 534894 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0127 12:36:17.098271 534894 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0127 12:36:17.098299 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
I0127 12:36:17.098871 534894 main.go:141] libmachine: Using API Version 1
I0127 12:36:17.098889 534894 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:36:17.099255 534894 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:36:17.099465 534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
I0127 12:36:17.099541 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetState
I0127 12:36:17.100856 534894 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0127 12:36:12.874242 532607 pod_ready.go:93] pod "etcd-embed-certs-346100" in "kube-system" namespace has status "Ready":"True"
I0127 12:36:12.874282 532607 pod_ready.go:82] duration metric: took 9.010574512s for pod "etcd-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
I0127 12:36:12.874303 532607 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
I0127 12:36:12.882689 532607 pod_ready.go:93] pod "kube-apiserver-embed-certs-346100" in "kube-system" namespace has status "Ready":"True"
I0127 12:36:12.882775 532607 pod_ready.go:82] duration metric: took 8.462495ms for pod "kube-apiserver-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
I0127 12:36:12.882801 532607 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
I0127 12:36:12.888659 532607 pod_ready.go:93] pod "kube-controller-manager-embed-certs-346100" in "kube-system" namespace has status "Ready":"True"
I0127 12:36:12.888693 532607 pod_ready.go:82] duration metric: took 5.874272ms for pod "kube-controller-manager-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
I0127 12:36:12.888707 532607 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-smp6l" in "kube-system" namespace to be "Ready" ...
I0127 12:36:12.894080 532607 pod_ready.go:93] pod "kube-proxy-smp6l" in "kube-system" namespace has status "Ready":"True"
I0127 12:36:12.894141 532607 pod_ready.go:82] duration metric: took 5.425838ms for pod "kube-proxy-smp6l" in "kube-system" namespace to be "Ready" ...
I0127 12:36:12.894163 532607 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
I0127 12:36:12.900793 532607 pod_ready.go:93] pod "kube-scheduler-embed-certs-346100" in "kube-system" namespace has status "Ready":"True"
I0127 12:36:12.900849 532607 pod_ready.go:82] duration metric: took 6.668808ms for pod "kube-scheduler-embed-certs-346100" in "kube-system" namespace to be "Ready" ...
I0127 12:36:12.900869 532607 pod_ready.go:39] duration metric: took 9.044300135s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
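The pod_ready.go block above (PID 532607, the parallel embed-certs-346100 run interleaved into this log) waits on the standard PodReady condition for each system-critical pod. The check itself reduces to a few lines over client-go types; this fragment assumes a clientset wired up as in the kube-system listing sketch earlier, with corev1 imported as "k8s.io/api/core/v1":

    // podReady reports whether the standard Ready condition is True on the pod.
    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }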
I0127 12:36:12.900904 532607 api_server.go:52] waiting for apiserver process to appear ...
I0127 12:36:12.900995 532607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 12:36:12.922995 532607 api_server.go:72] duration metric: took 9.343524429s to wait for apiserver process to appear ...
I0127 12:36:12.923066 532607 api_server.go:88] waiting for apiserver healthz status ...
I0127 12:36:12.923097 532607 api_server.go:253] Checking apiserver healthz at https://192.168.50.206:8443/healthz ...
I0127 12:36:12.930234 532607 api_server.go:279] https://192.168.50.206:8443/healthz returned 200:
ok
I0127 12:36:12.931482 532607 api_server.go:141] control plane version: v1.32.1
I0127 12:36:12.931504 532607 api_server.go:131] duration metric: took 8.421115ms to wait for apiserver health ...
I0127 12:36:12.931513 532607 system_pods.go:43] waiting for kube-system pods to appear ...
I0127 12:36:13.073659 532607 system_pods.go:59] 9 kube-system pods found
I0127 12:36:13.073701 532607 system_pods.go:61] "coredns-668d6bf9bc-46nfk" [ca146154-7693-43e5-ae2a-f0c3148327b9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0127 12:36:13.073712 532607 system_pods.go:61] "coredns-668d6bf9bc-9p64b" [4d44d79e-ea3d-4085-9fb2-356746e71e9a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0127 12:36:13.073722 532607 system_pods.go:61] "etcd-embed-certs-346100" [cb00782a-b078-43ee-aa3f-4806aa7629d6] Running
I0127 12:36:13.073729 532607 system_pods.go:61] "kube-apiserver-embed-certs-346100" [7b0a8d77-4737-4bde-8e2a-2462c524f9a2] Running
I0127 12:36:13.073735 532607 system_pods.go:61] "kube-controller-manager-embed-certs-346100" [196254b2-812b-43a4-ae10-d55a11957faf] Running
I0127 12:36:13.073741 532607 system_pods.go:61] "kube-proxy-smp6l" [886c9cd4-795b-4e33-a16e-e12302c37665] Running
I0127 12:36:13.073746 532607 system_pods.go:61] "kube-scheduler-embed-certs-346100" [90cbc1fe-52a3-45d8-a8e9-edc60f5c4829] Running
I0127 12:36:13.073754 532607 system_pods.go:61] "metrics-server-f79f97bbb-w8fsn" [3a78ab43-37b0-4dc0-89a9-59a558ef997c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0127 12:36:13.073811 532607 system_pods.go:61] "storage-provisioner" [0d021617-8412-4f33-ba4f-2b3b458721ff] Running
I0127 12:36:13.073828 532607 system_pods.go:74] duration metric: took 142.306493ms to wait for pod list to return data ...
I0127 12:36:13.073848 532607 default_sa.go:34] waiting for default service account to be created ...
I0127 12:36:13.273298 532607 default_sa.go:45] found service account: "default"
I0127 12:36:13.273415 532607 default_sa.go:55] duration metric: took 199.555226ms for default service account to be created ...
I0127 12:36:13.273446 532607 system_pods.go:137] waiting for k8s-apps to be running ...
I0127 12:36:13.477525 532607 system_pods.go:87] 9 kube-system pods found
I0127 12:36:17.101529 534894 main.go:141] libmachine: (newest-cni-610630) Calling .DriverName
I0127 12:36:17.101719 534894 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0127 12:36:17.101731 534894 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0127 12:36:17.101745 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
I0127 12:36:17.102276 534894 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0127 12:36:17.102295 534894 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0127 12:36:17.102329 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHHostname
I0127 12:36:17.102718 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:36:17.103291 534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
I0127 12:36:17.103308 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:36:17.103462 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
I0127 12:36:17.103607 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
I0127 12:36:17.103729 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
I0127 12:36:17.103834 534894 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/newest-cni-610630/id_rsa Username:docker}
I0127 12:36:17.106885 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:36:17.107336 534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
I0127 12:36:17.107361 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:36:17.107579 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:36:17.107585 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
I0127 12:36:17.107768 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
I0127 12:36:17.107957 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
I0127 12:36:17.108065 534894 main.go:141] libmachine: (newest-cni-610630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:61:34", ip: ""} in network mk-newest-cni-610630: {Iface:virbr3 ExpiryTime:2025-01-27 13:35:03 +0000 UTC Type:0 Mac:52:54:00:49:61:34 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:newest-cni-610630 Clientid:01:52:54:00:49:61:34}
I0127 12:36:17.108184 534894 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/newest-cni-610630/id_rsa Username:docker}
I0127 12:36:17.108305 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHPort
I0127 12:36:17.108457 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHKeyPath
I0127 12:36:17.108478 534894 main.go:141] libmachine: (newest-cni-610630) DBG | domain newest-cni-610630 has defined IP address 192.168.39.228 and MAC address 52:54:00:49:61:34 in network mk-newest-cni-610630
I0127 12:36:17.108587 534894 main.go:141] libmachine: (newest-cni-610630) Calling .GetSSHUsername
I0127 12:36:17.108674 534894 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/newest-cni-610630/id_rsa Username:docker}
I0127 12:36:17.319272 534894 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0127 12:36:17.355389 534894 api_server.go:52] waiting for apiserver process to appear ...
I0127 12:36:17.355483 534894 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 12:36:17.383883 534894 api_server.go:72] duration metric: took 368.528555ms to wait for apiserver process to appear ...
I0127 12:36:17.383915 534894 api_server.go:88] waiting for apiserver healthz status ...
I0127 12:36:17.383940 534894 api_server.go:253] Checking apiserver healthz at https://192.168.39.228:8443/healthz ...
I0127 12:36:17.392047 534894 api_server.go:279] https://192.168.39.228:8443/healthz returned 200:
ok
I0127 12:36:17.393460 534894 api_server.go:141] control plane version: v1.32.1
I0127 12:36:17.393491 534894 api_server.go:131] duration metric: took 9.56764ms to wait for apiserver health ...
I0127 12:36:17.393503 534894 system_pods.go:43] waiting for kube-system pods to appear ...
I0127 12:36:17.419483 534894 system_pods.go:59] 9 kube-system pods found
I0127 12:36:17.419523 534894 system_pods.go:61] "coredns-668d6bf9bc-n6hwn" [24d3582e-97d0-4bb8-b12a-6f69ecd72309] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0127 12:36:17.419533 534894 system_pods.go:61] "coredns-668d6bf9bc-vg4bb" [3d20d4a5-8ddf-4166-af63-47beab76d25f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0127 12:36:17.419543 534894 system_pods.go:61] "etcd-newest-cni-610630" [0a812f8b-1e38-49a2-be17-9db3fb1979db] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0127 12:36:17.419550 534894 system_pods.go:61] "kube-apiserver-newest-cni-610630" [292fb9b9-ccfb-49b5-892d-078e5897981d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0127 12:36:17.419559 534894 system_pods.go:61] "kube-controller-manager-newest-cni-610630" [36d1c2e6-42ce-4d4a-8bba-ff7beb6551ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0127 12:36:17.419565 534894 system_pods.go:61] "kube-proxy-8szpt" [11487c0b-6afa-4c83-9eb8-6a9f609f7b58] Running
I0127 12:36:17.419574 534894 system_pods.go:61] "kube-scheduler-newest-cni-610630" [2f8744a9-54dc-4b5c-92f6-b0d3b9b0de7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0127 12:36:17.419582 534894 system_pods.go:61] "metrics-server-f79f97bbb-kcc5g" [6593df15-330d-4389-b878-45d396d718b0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0127 12:36:17.419591 534894 system_pods.go:61] "storage-provisioner" [3cc7604e-fcbb-48e5-8445-82d5150b759f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0127 12:36:17.419601 534894 system_pods.go:74] duration metric: took 26.090469ms to wait for pod list to return data ...
I0127 12:36:17.419614 534894 default_sa.go:34] waiting for default service account to be created ...
I0127 12:36:17.422917 534894 default_sa.go:45] found service account: "default"
I0127 12:36:17.422941 534894 default_sa.go:55] duration metric: took 3.317044ms for default service account to be created ...
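The default_sa.go check above is a plain existence test on the "default" ServiceAccount in the default namespace; its presence signals that kubeadm has finished bootstrapping core objects. As a fragment (same client-go setup as the earlier sketch, with metav1 imported as "k8s.io/apimachinery/pkg/apis/meta/v1"):

    // defaultSAExists reports whether the default ServiceAccount has been created.
    func defaultSAExists(ctx context.Context, cs *kubernetes.Clientset) bool {
        _, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
        return err == nil
    }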
I0127 12:36:17.422956 534894 kubeadm.go:582] duration metric: took 407.606907ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
I0127 12:36:17.422975 534894 node_conditions.go:102] verifying NodePressure condition ...
I0127 12:36:17.429059 534894 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0127 12:36:17.429091 534894 node_conditions.go:123] node cpu capacity is 2
I0127 12:36:17.429116 534894 node_conditions.go:105] duration metric: took 6.133766ms to run NodePressure ...
I0127 12:36:17.429138 534894 start.go:241] waiting for startup goroutines ...
I0127 12:36:17.493751 534894 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0127 12:36:17.493777 534894 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0127 12:36:17.496271 534894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0127 12:36:17.540289 534894 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0127 12:36:17.540321 534894 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0127 12:36:17.595530 534894 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0127 12:36:17.595565 534894 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0127 12:36:17.609027 534894 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0127 12:36:17.609055 534894 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0127 12:36:17.726024 534894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0127 12:36:17.764459 534894 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0127 12:36:17.764492 534894 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0127 12:36:17.764569 534894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0127 12:36:17.852391 534894 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0127 12:36:17.852429 534894 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0127 12:36:17.964392 534894 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0127 12:36:17.964417 534894 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0127 12:36:18.185418 532844 kubeadm.go:310] [api-check] The API server is healthy after 6.002059282s
I0127 12:36:18.204454 532844 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0127 12:36:18.218201 532844 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0127 12:36:18.245054 532844 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0127 12:36:18.245331 532844 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-887672 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0127 12:36:18.257186 532844 kubeadm.go:310] [bootstrap-token] Using token: 5yhtlj.kyb5uzy41lrz34us
I0127 12:36:18.258581 532844 out.go:235] - Configuring RBAC rules ...
I0127 12:36:18.258747 532844 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0127 12:36:18.265191 532844 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0127 12:36:18.272296 532844 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0127 12:36:18.285037 532844 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0127 12:36:18.285204 532844 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0127 12:36:18.285313 532844 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0127 12:36:18.593364 532844 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0127 12:36:19.042942 532844 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0127 12:36:19.593432 532844 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0127 12:36:19.594797 532844 kubeadm.go:310]
I0127 12:36:19.594875 532844 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0127 12:36:19.594888 532844 kubeadm.go:310]
I0127 12:36:19.594970 532844 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0127 12:36:19.594981 532844 kubeadm.go:310]
I0127 12:36:19.595011 532844 kubeadm.go:310] mkdir -p $HOME/.kube
I0127 12:36:19.595081 532844 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0127 12:36:19.595152 532844 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0127 12:36:19.595166 532844 kubeadm.go:310]
I0127 12:36:19.595239 532844 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0127 12:36:19.595246 532844 kubeadm.go:310]
I0127 12:36:19.595301 532844 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0127 12:36:19.595308 532844 kubeadm.go:310]
I0127 12:36:19.595371 532844 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0127 12:36:19.595464 532844 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0127 12:36:19.595545 532844 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0127 12:36:19.595554 532844 kubeadm.go:310]
I0127 12:36:19.595667 532844 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0127 12:36:19.595757 532844 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0127 12:36:19.595767 532844 kubeadm.go:310]
I0127 12:36:19.595869 532844 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 5yhtlj.kyb5uzy41lrz34us \
I0127 12:36:19.595998 532844 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:90220a32b97a19780da5783028af42ec7db4be5a9f4d7ee30b4871ae76b3d337 \
I0127 12:36:19.596017 532844 kubeadm.go:310] --control-plane
I0127 12:36:19.596021 532844 kubeadm.go:310]
I0127 12:36:19.596121 532844 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0127 12:36:19.596137 532844 kubeadm.go:310]
I0127 12:36:19.596223 532844 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 5yhtlj.kyb5uzy41lrz34us \
I0127 12:36:19.596305 532844 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:90220a32b97a19780da5783028af42ec7db4be5a9f4d7ee30b4871ae76b3d337
I0127 12:36:19.598645 532844 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
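The --discovery-token-ca-cert-hash in the join commands above is, per kubeadm's format, a SHA-256 over the DER-encoded Subject Public Key Info of the cluster CA certificate, so it can be recomputed from /etc/kubernetes/pki/ca.crt to verify a join command out of band. A standalone sketch:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the CA cert's Subject Public Key Info
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }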
I0127 12:36:19.598687 532844 cni.go:84] Creating CNI manager for ""
I0127 12:36:19.598696 532844 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0127 12:36:19.600188 532844 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0127 12:36:18.113709 534894 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0127 12:36:18.113742 534894 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0127 12:36:18.153599 534894 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0127 12:36:18.153635 534894 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0127 12:36:18.176500 534894 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0127 12:36:18.176539 534894 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0127 12:36:18.216973 534894 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0127 12:36:18.217007 534894 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0127 12:36:18.274511 534894 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0127 12:36:18.274583 534894 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0127 12:36:18.342333 534894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0127 12:36:18.361302 534894 main.go:141] libmachine: Making call to close driver server
I0127 12:36:18.361342 534894 main.go:141] libmachine: (newest-cni-610630) Calling .Close
I0127 12:36:18.361665 534894 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:36:18.361699 534894 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:36:18.361710 534894 main.go:141] libmachine: Making call to close driver server
I0127 12:36:18.361719 534894 main.go:141] libmachine: (newest-cni-610630) Calling .Close
I0127 12:36:18.362117 534894 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:36:18.362140 534894 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:36:18.362144 534894 main.go:141] libmachine: (newest-cni-610630) DBG | Closing plugin on server side
I0127 12:36:18.371041 534894 main.go:141] libmachine: Making call to close driver server
I0127 12:36:18.371065 534894 main.go:141] libmachine: (newest-cni-610630) Calling .Close
I0127 12:36:18.371339 534894 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:36:18.371377 534894 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:36:19.594328 534894 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.868263184s)
I0127 12:36:19.594692 534894 main.go:141] libmachine: Making call to close driver server
I0127 12:36:19.594482 534894 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.829887156s)
I0127 12:36:19.594790 534894 main.go:141] libmachine: Making call to close driver server
I0127 12:36:19.594804 534894 main.go:141] libmachine: (newest-cni-610630) Calling .Close
I0127 12:36:19.595140 534894 main.go:141] libmachine: (newest-cni-610630) DBG | Closing plugin on server side
I0127 12:36:19.595208 534894 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:36:19.595219 534894 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:36:19.595238 534894 main.go:141] libmachine: Making call to close driver server
I0127 12:36:19.595247 534894 main.go:141] libmachine: (newest-cni-610630) Calling .Close
I0127 12:36:19.595556 534894 main.go:141] libmachine: (newest-cni-610630) DBG | Closing plugin on server side
I0127 12:36:19.595579 534894 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:36:19.595600 534894 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:36:19.595618 534894 addons.go:479] Verifying addon metrics-server=true in "newest-cni-610630"
I0127 12:36:19.596388 534894 main.go:141] libmachine: (newest-cni-610630) Calling .Close
I0127 12:36:19.596722 534894 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:36:19.596754 534894 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:36:19.596763 534894 main.go:141] libmachine: Making call to close driver server
I0127 12:36:19.596770 534894 main.go:141] libmachine: (newest-cni-610630) Calling .Close
I0127 12:36:19.597063 534894 main.go:141] libmachine: (newest-cni-610630) DBG | Closing plugin on server side
I0127 12:36:19.597086 534894 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:36:19.597098 534894 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:36:20.095246 534894 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.752863121s)
I0127 12:36:20.095306 534894 main.go:141] libmachine: Making call to close driver server
I0127 12:36:20.095324 534894 main.go:141] libmachine: (newest-cni-610630) Calling .Close
I0127 12:36:20.095623 534894 main.go:141] libmachine: (newest-cni-610630) DBG | Closing plugin on server side
I0127 12:36:20.095685 534894 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:36:20.095695 534894 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:36:20.095711 534894 main.go:141] libmachine: Making call to close driver server
I0127 12:36:20.095721 534894 main.go:141] libmachine: (newest-cni-610630) Calling .Close
I0127 12:36:20.096021 534894 main.go:141] libmachine: (newest-cni-610630) DBG | Closing plugin on server side
I0127 12:36:20.096038 534894 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:36:20.096055 534894 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:36:20.097482 534894 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p newest-cni-610630 addons enable metrics-server
I0127 12:36:20.098730 534894 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
I0127 12:36:20.099860 534894 addons.go:514] duration metric: took 3.084737287s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner dashboard]
I0127 12:36:20.099913 534894 start.go:246] waiting for cluster config update ...
I0127 12:36:20.099934 534894 start.go:255] writing updated cluster config ...
I0127 12:36:20.100260 534894 ssh_runner.go:195] Run: rm -f paused
I0127 12:36:20.153018 534894 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
I0127 12:36:20.154413 534894 out.go:177] * Done! kubectl is now configured to use "newest-cni-610630" cluster and "default" namespace by default
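
The addon flow above follows one pattern throughout: each manifest is scp'd into /etc/kubernetes/addons/ and the whole set is then applied with a single kubectl invocation run over SSH inside the VM. A minimal Go sketch of how such a command line can be assembled (the helper name and shape are illustrative, not minikube's actual ssh_runner/addons API; only the command format is taken from the Run: lines above):

package main

import (
	"fmt"
	"strings"
)

// applyCommand builds the single "sudo KUBECONFIG=... kubectl apply -f a.yaml -f b.yaml ..."
// shell command executed inside the VM. Hypothetical helper for illustration.
func applyCommand(kubectl, kubeconfig string, manifests []string) string {
	args := make([]string, 0, 2*len(manifests))
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	return fmt.Sprintf("sudo KUBECONFIG=%s %s apply %s",
		kubeconfig, kubectl, strings.Join(args, " "))
}

func main() {
	// Printing the result reproduces the shape of the logged Run: lines.
	fmt.Println(applyCommand(
		"/var/lib/minikube/binaries/v1.32.1/kubectl",
		"/var/lib/minikube/kubeconfig",
		[]string{
			"/etc/kubernetes/addons/dashboard-ns.yaml",
			"/etc/kubernetes/addons/dashboard-svc.yaml",
		},
	))
}
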
I0127 12:36:19.601391 532844 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0127 12:36:19.615483 532844 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0127 12:36:19.641045 532844 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0127 12:36:19.641123 532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:36:19.641161 532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-887672 minikube.k8s.io/updated_at=2025_01_27T12_36_19_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650 minikube.k8s.io/name=default-k8s-diff-port-887672 minikube.k8s.io/primary=true
I0127 12:36:19.655315 532844 ops.go:34] apiserver oom_adj: -16
I0127 12:36:19.893685 532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:36:20.394472 532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:36:20.893933 532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:36:21.394823 532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:36:21.893992 532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:36:22.393950 532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:36:22.894084 532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:36:23.394506 532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:36:23.893909 532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:36:24.393790 532844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:36:24.491305 532844 kubeadm.go:1113] duration metric: took 4.850249048s to wait for elevateKubeSystemPrivileges
I0127 12:36:24.491356 532844 kubeadm.go:394] duration metric: took 4m37.901720321s to StartCluster
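
The burst of identical `kubectl get sa default` runs above is a fixed-interval retry: kubeadm.go re-checks roughly every 500ms until the default service account exists, then logs the elevateKubeSystemPrivileges duration. The same pattern in plain Go (pollUntil is a hypothetical helper, not minikube's code):

package main

import (
	"fmt"
	"time"
)

// pollUntil retries check at a fixed interval until it succeeds or the
// timeout elapses, mirroring the half-second "get sa default" retries above.
func pollUntil(interval, timeout time.Duration, check func() bool) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if check() {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("condition not met within %s", timeout)
}

func main() {
	start := time.Now()
	// Stand-in condition: pretend the service account appears after ~3s.
	err := pollUntil(500*time.Millisecond, 2*time.Minute, func() bool {
		return time.Since(start) > 3*time.Second
	})
	fmt.Println(err, "- waited", time.Since(start).Round(time.Millisecond))
}
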
I0127 12:36:24.491385 532844 settings.go:142] acquiring lock: {Name:mkc626b99c5f2ef89a002643cb7e51a3cbdf8ffc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 12:36:24.491488 532844 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20318-471120/kubeconfig
I0127 12:36:24.493752 532844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-471120/kubeconfig: {Name:mk452cc8a4801513f9fb799655fd8ea78318fe87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 12:36:24.494040 532844 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.130 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0127 12:36:24.494175 532844 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0127 12:36:24.494273 532844 config.go:182] Loaded profile config "default-k8s-diff-port-887672": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 12:36:24.494285 532844 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-887672"
I0127 12:36:24.494323 532844 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-887672"
I0127 12:36:24.494316 532844 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-887672"
I0127 12:36:24.494338 532844 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-887672"
I0127 12:36:24.494372 532844 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-887672"
I0127 12:36:24.494381 532844 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-887672"
W0127 12:36:24.494394 532844 addons.go:247] addon dashboard should already be in state true
W0127 12:36:24.494332 532844 addons.go:247] addon storage-provisioner should already be in state true
I0127 12:36:24.494432 532844 host.go:66] Checking if "default-k8s-diff-port-887672" exists ...
I0127 12:36:24.494463 532844 host.go:66] Checking if "default-k8s-diff-port-887672" exists ...
I0127 12:36:24.494323 532844 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-887672"
I0127 12:36:24.494553 532844 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-887672"
W0127 12:36:24.494564 532844 addons.go:247] addon metrics-server should already be in state true
I0127 12:36:24.494598 532844 host.go:66] Checking if "default-k8s-diff-port-887672" exists ...
I0127 12:36:24.494863 532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:36:24.494871 532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:36:24.494871 532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:36:24.494905 532844 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:36:24.494911 532844 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:36:24.495037 532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:36:24.495049 532844 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:36:24.495123 532844 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:36:24.495481 532844 out.go:177] * Verifying Kubernetes components...
I0127 12:36:24.496811 532844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 12:36:24.513577 532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34333
I0127 12:36:24.514115 532844 main.go:141] libmachine: () Calling .GetVersion
I0127 12:36:24.514694 532844 main.go:141] libmachine: Using API Version 1
I0127 12:36:24.514720 532844 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:36:24.515161 532844 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:36:24.515484 532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39015
I0127 12:36:24.515836 532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39703
I0127 12:36:24.515999 532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39095
I0127 12:36:24.516094 532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:36:24.516144 532844 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:36:24.516192 532844 main.go:141] libmachine: () Calling .GetVersion
I0127 12:36:24.516413 532844 main.go:141] libmachine: () Calling .GetVersion
I0127 12:36:24.516675 532844 main.go:141] libmachine: Using API Version 1
I0127 12:36:24.516695 532844 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:36:24.516974 532844 main.go:141] libmachine: Using API Version 1
I0127 12:36:24.516994 532844 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:36:24.517001 532844 main.go:141] libmachine: () Calling .GetVersion
I0127 12:36:24.517393 532844 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:36:24.517583 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetState
I0127 12:36:24.517647 532844 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:36:24.518197 532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:36:24.518252 532844 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:36:24.518469 532844 main.go:141] libmachine: Using API Version 1
I0127 12:36:24.518494 532844 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:36:24.518868 532844 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:36:24.519422 532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:36:24.519470 532844 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:36:24.521629 532844 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-887672"
W0127 12:36:24.521653 532844 addons.go:247] addon default-storageclass should already be in state true
I0127 12:36:24.521684 532844 host.go:66] Checking if "default-k8s-diff-port-887672" exists ...
I0127 12:36:24.522040 532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:36:24.522081 532844 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:36:24.534712 532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46317
I0127 12:36:24.535195 532844 main.go:141] libmachine: () Calling .GetVersion
I0127 12:36:24.536504 532844 main.go:141] libmachine: Using API Version 1
I0127 12:36:24.536527 532844 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:36:24.536554 532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36725
I0127 12:36:24.536902 532844 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:36:24.536959 532844 main.go:141] libmachine: () Calling .GetVersion
I0127 12:36:24.537111 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetState
I0127 12:36:24.537597 532844 main.go:141] libmachine: Using API Version 1
I0127 12:36:24.537616 532844 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:36:24.537969 532844 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:36:24.538145 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetState
I0127 12:36:24.538989 532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39549
I0127 12:36:24.539580 532844 main.go:141] libmachine: () Calling .GetVersion
I0127 12:36:24.540009 532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41521
I0127 12:36:24.540196 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .DriverName
I0127 12:36:24.540422 532844 main.go:141] libmachine: () Calling .GetVersion
I0127 12:36:24.540715 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .DriverName
I0127 12:36:24.540879 532844 main.go:141] libmachine: Using API Version 1
I0127 12:36:24.540902 532844 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:36:24.540934 532844 main.go:141] libmachine: Using API Version 1
I0127 12:36:24.540948 532844 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:36:24.541341 532844 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:36:24.541388 532844 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:36:24.541685 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetState
I0127 12:36:24.542042 532844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 12:36:24.542090 532844 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:36:24.542251 532844 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0127 12:36:24.542373 532844 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0127 12:36:24.543206 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .DriverName
I0127 12:36:24.543412 532844 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0127 12:36:24.543430 532844 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0127 12:36:24.543460 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHHostname
I0127 12:36:24.544493 532844 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0127 12:36:24.545545 532844 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0127 12:36:24.545643 532844 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0127 12:36:24.545656 532844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0127 12:36:24.545671 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHHostname
I0127 12:36:24.546541 532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0127 12:36:24.546563 532844 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0127 12:36:24.546584 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHHostname
I0127 12:36:24.547093 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
I0127 12:36:24.547276 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHPort
I0127 12:36:24.547478 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHKeyPath
I0127 12:36:24.547900 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:54:e1", ip: ""} in network mk-default-k8s-diff-port-887672: {Iface:virbr2 ExpiryTime:2025-01-27 13:31:34 +0000 UTC Type:0 Mac:52:54:00:65:54:e1 Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:default-k8s-diff-port-887672 Clientid:01:52:54:00:65:54:e1}
I0127 12:36:24.548065 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined IP address 192.168.61.130 and MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
I0127 12:36:24.547944 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHUsername
I0127 12:36:24.548278 532844 sshutil.go:53] new ssh client: &{IP:192.168.61.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/default-k8s-diff-port-887672/id_rsa Username:docker}
I0127 12:36:24.549918 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
I0127 12:36:24.550146 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:54:e1", ip: ""} in network mk-default-k8s-diff-port-887672: {Iface:virbr2 ExpiryTime:2025-01-27 13:31:34 +0000 UTC Type:0 Mac:52:54:00:65:54:e1 Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:default-k8s-diff-port-887672 Clientid:01:52:54:00:65:54:e1}
I0127 12:36:24.550170 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined IP address 192.168.61.130 and MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
I0127 12:36:24.550429 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHPort
I0127 12:36:24.550517 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
I0127 12:36:24.550608 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHKeyPath
I0127 12:36:24.550758 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHUsername
I0127 12:36:24.550914 532844 sshutil.go:53] new ssh client: &{IP:192.168.61.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/default-k8s-diff-port-887672/id_rsa Username:docker}
I0127 12:36:24.550956 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:54:e1", ip: ""} in network mk-default-k8s-diff-port-887672: {Iface:virbr2 ExpiryTime:2025-01-27 13:31:34 +0000 UTC Type:0 Mac:52:54:00:65:54:e1 Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:default-k8s-diff-port-887672 Clientid:01:52:54:00:65:54:e1}
I0127 12:36:24.550993 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined IP address 192.168.61.130 and MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
I0127 12:36:24.551165 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHPort
I0127 12:36:24.551308 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHKeyPath
I0127 12:36:24.551460 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHUsername
I0127 12:36:24.551595 532844 sshutil.go:53] new ssh client: &{IP:192.168.61.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/default-k8s-diff-port-887672/id_rsa Username:docker}
I0127 12:36:24.566621 532844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37537
I0127 12:36:24.567007 532844 main.go:141] libmachine: () Calling .GetVersion
I0127 12:36:24.567434 532844 main.go:141] libmachine: Using API Version 1
I0127 12:36:24.567460 532844 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:36:24.567879 532844 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:36:24.568040 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetState
I0127 12:36:24.569632 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .DriverName
I0127 12:36:24.569844 532844 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0127 12:36:24.569859 532844 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0127 12:36:24.569875 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHHostname
I0127 12:36:24.572937 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
I0127 12:36:24.573361 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:54:e1", ip: ""} in network mk-default-k8s-diff-port-887672: {Iface:virbr2 ExpiryTime:2025-01-27 13:31:34 +0000 UTC Type:0 Mac:52:54:00:65:54:e1 Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:default-k8s-diff-port-887672 Clientid:01:52:54:00:65:54:e1}
I0127 12:36:24.573377 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | domain default-k8s-diff-port-887672 has defined IP address 192.168.61.130 and MAC address 52:54:00:65:54:e1 in network mk-default-k8s-diff-port-887672
I0127 12:36:24.573577 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHPort
I0127 12:36:24.573757 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHKeyPath
I0127 12:36:24.573888 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .GetSSHUsername
I0127 12:36:24.574044 532844 sshutil.go:53] new ssh client: &{IP:192.168.61.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20318-471120/.minikube/machines/default-k8s-diff-port-887672/id_rsa Username:docker}
I0127 12:36:24.747290 532844 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0127 12:36:24.779846 532844 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-887672" to be "Ready" ...
I0127 12:36:24.813551 532844 node_ready.go:49] node "default-k8s-diff-port-887672" has status "Ready":"True"
I0127 12:36:24.813582 532844 node_ready.go:38] duration metric: took 33.68566ms for node "default-k8s-diff-port-887672" to be "Ready" ...
I0127 12:36:24.813594 532844 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0127 12:36:24.825398 532844 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-jc882" in "kube-system" namespace to be "Ready" ...
I0127 12:36:24.855841 532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0127 12:36:24.855869 532844 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0127 12:36:24.865288 532844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0127 12:36:24.890399 532844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0127 12:36:24.907963 532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0127 12:36:24.907990 532844 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0127 12:36:24.923409 532844 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0127 12:36:24.923434 532844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0127 12:36:24.967186 532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0127 12:36:24.967211 532844 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0127 12:36:25.003133 532844 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0127 12:36:25.003167 532844 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0127 12:36:25.031491 532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0127 12:36:25.031515 532844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0127 12:36:25.086171 532844 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0127 12:36:25.086201 532844 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0127 12:36:25.147825 532844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0127 12:36:25.152298 532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0127 12:36:25.152324 532844 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0127 12:36:25.203235 532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0127 12:36:25.203264 532844 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0127 12:36:25.242547 532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0127 12:36:25.242578 532844 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0127 12:36:25.281622 532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0127 12:36:25.281659 532844 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0127 12:36:25.312416 532844 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0127 12:36:25.312444 532844 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0127 12:36:25.365802 532844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0127 12:36:25.651534 532844 main.go:141] libmachine: Making call to close driver server
I0127 12:36:25.651566 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .Close
I0127 12:36:25.651590 532844 main.go:141] libmachine: Making call to close driver server
I0127 12:36:25.651612 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .Close
I0127 12:36:25.651995 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | Closing plugin on server side
I0127 12:36:25.652009 532844 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:36:25.652020 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | Closing plugin on server side
I0127 12:36:25.652021 532844 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:36:25.652033 532844 main.go:141] libmachine: Making call to close driver server
I0127 12:36:25.652036 532844 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:36:25.652040 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .Close
I0127 12:36:25.652047 532844 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:36:25.652055 532844 main.go:141] libmachine: Making call to close driver server
I0127 12:36:25.652063 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .Close
I0127 12:36:25.652511 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | Closing plugin on server side
I0127 12:36:25.652572 532844 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:36:25.652594 532844 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:36:25.652580 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | Closing plugin on server side
I0127 12:36:25.652592 532844 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:36:25.652796 532844 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:36:25.667377 532844 main.go:141] libmachine: Making call to close driver server
I0127 12:36:25.667403 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .Close
I0127 12:36:25.667693 532844 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:36:25.667709 532844 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:36:25.974214 532844 main.go:141] libmachine: Making call to close driver server
I0127 12:36:25.974246 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .Close
I0127 12:36:25.974553 532844 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:36:25.974574 532844 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:36:25.974591 532844 main.go:141] libmachine: Making call to close driver server
I0127 12:36:25.974600 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .Close
I0127 12:36:25.974992 532844 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:36:25.975017 532844 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:36:25.975032 532844 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-887672"
I0127 12:36:26.960702 532844 pod_ready.go:103] pod "coredns-668d6bf9bc-jc882" in "kube-system" namespace has status "Ready":"False"
I0127 12:36:27.097489 532844 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.731632212s)
I0127 12:36:27.097551 532844 main.go:141] libmachine: Making call to close driver server
I0127 12:36:27.097567 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .Close
I0127 12:36:27.097886 532844 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:36:27.097909 532844 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:36:27.097909 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) DBG | Closing plugin on server side
I0127 12:36:27.097917 532844 main.go:141] libmachine: Making call to close driver server
I0127 12:36:27.097935 532844 main.go:141] libmachine: (default-k8s-diff-port-887672) Calling .Close
I0127 12:36:27.098221 532844 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:36:27.098291 532844 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:36:27.099837 532844 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p default-k8s-diff-port-887672 addons enable metrics-server
I0127 12:36:27.101354 532844 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
I0127 12:36:27.102395 532844 addons.go:514] duration metric: took 2.608238219s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
I0127 12:36:29.331790 532844 pod_ready.go:103] pod "coredns-668d6bf9bc-jc882" in "kube-system" namespace has status "Ready":"False"
I0127 12:36:31.334726 532844 pod_ready.go:103] pod "coredns-668d6bf9bc-jc882" in "kube-system" namespace has status "Ready":"False"
I0127 12:36:33.834237 532844 pod_ready.go:103] pod "coredns-668d6bf9bc-jc882" in "kube-system" namespace has status "Ready":"False"
I0127 12:36:34.374688 532844 pod_ready.go:93] pod "coredns-668d6bf9bc-jc882" in "kube-system" namespace has status "Ready":"True"
I0127 12:36:34.374713 532844 pod_ready.go:82] duration metric: took 9.549290033s for pod "coredns-668d6bf9bc-jc882" in "kube-system" namespace to be "Ready" ...
I0127 12:36:34.374725 532844 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-s6rln" in "kube-system" namespace to be "Ready" ...
I0127 12:36:34.399299 532844 pod_ready.go:93] pod "coredns-668d6bf9bc-s6rln" in "kube-system" namespace has status "Ready":"True"
I0127 12:36:34.399323 532844 pod_ready.go:82] duration metric: took 24.589743ms for pod "coredns-668d6bf9bc-s6rln" in "kube-system" namespace to be "Ready" ...
I0127 12:36:34.399332 532844 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
I0127 12:36:34.421329 532844 pod_ready.go:93] pod "etcd-default-k8s-diff-port-887672" in "kube-system" namespace has status "Ready":"True"
I0127 12:36:34.421359 532844 pod_ready.go:82] duration metric: took 22.019877ms for pod "etcd-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
I0127 12:36:34.421399 532844 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
I0127 12:36:34.427922 532844 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-887672" in "kube-system" namespace has status "Ready":"True"
I0127 12:36:34.427946 532844 pod_ready.go:82] duration metric: took 6.537775ms for pod "kube-apiserver-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
I0127 12:36:34.427957 532844 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
I0127 12:36:34.447675 532844 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-887672" in "kube-system" namespace has status "Ready":"True"
I0127 12:36:34.447701 532844 pod_ready.go:82] duration metric: took 19.736139ms for pod "kube-controller-manager-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
I0127 12:36:34.447713 532844 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xl46c" in "kube-system" namespace to be "Ready" ...
I0127 12:36:34.729783 532844 pod_ready.go:93] pod "kube-proxy-xl46c" in "kube-system" namespace has status "Ready":"True"
I0127 12:36:34.729827 532844 pod_ready.go:82] duration metric: took 282.092476ms for pod "kube-proxy-xl46c" in "kube-system" namespace to be "Ready" ...
I0127 12:36:34.729841 532844 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
I0127 12:36:35.128755 532844 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-887672" in "kube-system" namespace has status "Ready":"True"
I0127 12:36:35.128781 532844 pod_ready.go:82] duration metric: took 398.931642ms for pod "kube-scheduler-default-k8s-diff-port-887672" in "kube-system" namespace to be "Ready" ...
I0127 12:36:35.128790 532844 pod_ready.go:39] duration metric: took 10.315186396s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
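
Each pod_ready.go wait above amounts to polling one pod until its Ready condition reports True. A sketch of the equivalent check with client-go (an assumption for illustration; the log does not show minikube's underlying implementation, and the pod name is simply one from the list above):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True, which is
// the status the pod_ready.go lines above wait for.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = wait.PollUntilContextTimeout(context.Background(), time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-proxy-xl46c", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient lookup errors: keep polling
			}
			return isPodReady(pod), nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}
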
I0127 12:36:35.128806 532844 api_server.go:52] waiting for apiserver process to appear ...
I0127 12:36:35.128870 532844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 12:36:35.148548 532844 api_server.go:72] duration metric: took 10.654456335s to wait for apiserver process to appear ...
I0127 12:36:35.148574 532844 api_server.go:88] waiting for apiserver healthz status ...
I0127 12:36:35.148597 532844 api_server.go:253] Checking apiserver healthz at https://192.168.61.130:8444/healthz ...
I0127 12:36:35.156175 532844 api_server.go:279] https://192.168.61.130:8444/healthz returned 200:
ok
I0127 12:36:35.157842 532844 api_server.go:141] control plane version: v1.32.1
I0127 12:36:35.157866 532844 api_server.go:131] duration metric: took 9.283401ms to wait for apiserver health ...
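
The healthz probe above is a plain HTTPS GET against the apiserver that expects a 200 response with body "ok". A self-contained stdlib sketch (InsecureSkipVerify is an assumption to keep the example short; a real check should verify against the cluster CA from the kubeconfig instead):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for this sketch only: skip certificate verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.61.130:8444/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}
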
I0127 12:36:35.157875 532844 system_pods.go:43] waiting for kube-system pods to appear ...
I0127 12:36:35.339567 532844 system_pods.go:59] 9 kube-system pods found
I0127 12:36:35.339606 532844 system_pods.go:61] "coredns-668d6bf9bc-jc882" [cc7b1851-f0b2-406d-b972-155b02dcefc6] Running
I0127 12:36:35.339614 532844 system_pods.go:61] "coredns-668d6bf9bc-s6rln" [553e1b5c-1bb3-48f4-bf25-6837dae6b627] Running
I0127 12:36:35.339620 532844 system_pods.go:61] "etcd-default-k8s-diff-port-887672" [cfe71b01-c4c5-4772-904f-0f22ebdc9481] Running
I0127 12:36:35.339625 532844 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-887672" [09952f8b-2235-45c2-aac8-328369a341dd] Running
I0127 12:36:35.339631 532844 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-887672" [6aee732f-0e4f-4362-b2d5-38e533a146c4] Running
I0127 12:36:35.339636 532844 system_pods.go:61] "kube-proxy-xl46c" [c2ddd14b-3d9e-4985-935e-5f64d188e68e] Running
I0127 12:36:35.339641 532844 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-887672" [7a436b79-cc6a-4311-9cb6-24537ed6aed0] Running
I0127 12:36:35.339652 532844 system_pods.go:61] "metrics-server-f79f97bbb-twqz4" [107a2af6-937d-4c95-a8dd-f47f59dd3afb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0127 12:36:35.339659 532844 system_pods.go:61] "storage-provisioner" [ebd493f5-ab93-4083-8174-aceb44741e99] Running
I0127 12:36:35.339675 532844 system_pods.go:74] duration metric: took 181.791009ms to wait for pod list to return data ...
I0127 12:36:35.339689 532844 default_sa.go:34] waiting for default service account to be created ...
I0127 12:36:35.528977 532844 default_sa.go:45] found service account: "default"
I0127 12:36:35.529018 532844 default_sa.go:55] duration metric: took 189.31757ms for default service account to be created ...
I0127 12:36:35.529033 532844 system_pods.go:137] waiting for k8s-apps to be running ...
I0127 12:36:35.732388 532844 system_pods.go:87] 9 kube-system pods found
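
system_pods.go then enumerates the kube-system pods and reports each one's state; in the list above, metrics-server is the only non-Running pod, since its fake.domain image can never be pulled (see the containerd errors further down). A client-go sketch of the same enumeration (assumed client setup, illustrative only):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("%q %s\n", p.Name, p.Status.Phase) // e.g. "storage-provisioner" Running
	}
}
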
==> container status <==
CONTAINER      IMAGE          CREATED         STATE    NAME                       ATTEMPT  POD ID         POD
3ceaaa73498cf  523cad1a4df73  34 seconds ago  Exited   dashboard-metrics-scraper  9        0573e71b6e2a1  dashboard-metrics-scraper-86c6bf9756-kd8j9
4b65326b3a3c3  07655ddf2eebe  21 minutes ago  Running  kubernetes-dashboard       0        ca768cf27c29d  kubernetes-dashboard-7779f9b69b-4vdvf
dc2d31b650f7f  6e38f40d628db  21 minutes ago  Running  storage-provisioner        0        4d79d92112052  storage-provisioner
e204bce6ab533  c69fa2e9cbf5f  21 minutes ago  Running  coredns                    0        ebc54e95eb844  coredns-668d6bf9bc-wwb9p
6a071a9d5905b  c69fa2e9cbf5f  21 minutes ago  Running  coredns                    0        28aa601e02f72  coredns-668d6bf9bc-v9stn
22d83b17aba0d  e29f9c7391fd9  21 minutes ago  Running  kube-proxy                 0        7286d10309151  kube-proxy-bbnm2
b3e3a512c59dc  a9e7e6b294baf  21 minutes ago  Running  etcd                       2        1375b8aa414ea  etcd-no-preload-215237
da65aa22e920d  2b0d6572d062c  21 minutes ago  Running  kube-scheduler             2        6eec9ecbf79af  kube-scheduler-no-preload-215237
41ac70a4bacec  019ee182b58e2  21 minutes ago  Running  kube-controller-manager    2        d6b3b59aaa35c  kube-controller-manager-no-preload-215237
95aa57ca824e9  95c0bda56fc4d  21 minutes ago  Running  kube-apiserver             2        53ffb55d3c5e4  kube-apiserver-no-preload-215237
==> containerd <==
Jan 27 12:51:37 no-preload-215237 containerd[562]: time="2025-01-27T12:51:37.506540428Z" level=info msg="StartContainer for \"ac1c43ff6b1b0874b35d10d81d6bf1abcb2072868dc4b0513eeb5239680c4383\" returns successfully"
Jan 27 12:51:37 no-preload-215237 containerd[562]: time="2025-01-27T12:51:37.551103428Z" level=info msg="shim disconnected" id=ac1c43ff6b1b0874b35d10d81d6bf1abcb2072868dc4b0513eeb5239680c4383 namespace=k8s.io
Jan 27 12:51:37 no-preload-215237 containerd[562]: time="2025-01-27T12:51:37.551240535Z" level=warning msg="cleaning up after shim disconnected" id=ac1c43ff6b1b0874b35d10d81d6bf1abcb2072868dc4b0513eeb5239680c4383 namespace=k8s.io
Jan 27 12:51:37 no-preload-215237 containerd[562]: time="2025-01-27T12:51:37.551361180Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 27 12:51:37 no-preload-215237 containerd[562]: time="2025-01-27T12:51:37.551571153Z" level=error msg="collecting metrics for ac1c43ff6b1b0874b35d10d81d6bf1abcb2072868dc4b0513eeb5239680c4383" error="ttrpc: closed: unknown"
Jan 27 12:51:37 no-preload-215237 containerd[562]: time="2025-01-27T12:51:37.568298936Z" level=warning msg="cleanup warnings time=\"2025-01-27T12:51:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 27 12:51:37 no-preload-215237 containerd[562]: time="2025-01-27T12:51:37.785024552Z" level=info msg="RemoveContainer for \"40e4bd940c7e40cf969b1dc3a54c32be8e002e8159e3f01c49725e3b27dc4cac\""
Jan 27 12:51:37 no-preload-215237 containerd[562]: time="2025-01-27T12:51:37.791916163Z" level=info msg="RemoveContainer for \"40e4bd940c7e40cf969b1dc3a54c32be8e002e8159e3f01c49725e3b27dc4cac\" returns successfully"
Jan 27 12:51:48 no-preload-215237 containerd[562]: time="2025-01-27T12:51:48.409506279Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 27 12:51:48 no-preload-215237 containerd[562]: time="2025-01-27T12:51:48.419140195Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
Jan 27 12:51:48 no-preload-215237 containerd[562]: time="2025-01-27T12:51:48.421212040Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
Jan 27 12:51:48 no-preload-215237 containerd[562]: time="2025-01-27T12:51:48.421431332Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
Jan 27 12:56:41 no-preload-215237 containerd[562]: time="2025-01-27T12:56:41.411523061Z" level=info msg="CreateContainer within sandbox \"0573e71b6e2a1421d0e3e5116b4f8b6c50a4b1d8ea3371d33246ede8628de50e\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,}"
Jan 27 12:56:41 no-preload-215237 containerd[562]: time="2025-01-27T12:56:41.433556422Z" level=info msg="CreateContainer within sandbox \"0573e71b6e2a1421d0e3e5116b4f8b6c50a4b1d8ea3371d33246ede8628de50e\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,} returns container id \"3ceaaa73498cf506a3914c9bbe41dcc3275dbafeaf93a2bfda389ee7406f8f4c\""
Jan 27 12:56:41 no-preload-215237 containerd[562]: time="2025-01-27T12:56:41.434538985Z" level=info msg="StartContainer for \"3ceaaa73498cf506a3914c9bbe41dcc3275dbafeaf93a2bfda389ee7406f8f4c\""
Jan 27 12:56:41 no-preload-215237 containerd[562]: time="2025-01-27T12:56:41.501948496Z" level=info msg="StartContainer for \"3ceaaa73498cf506a3914c9bbe41dcc3275dbafeaf93a2bfda389ee7406f8f4c\" returns successfully"
Jan 27 12:56:41 no-preload-215237 containerd[562]: time="2025-01-27T12:56:41.543428109Z" level=info msg="shim disconnected" id=3ceaaa73498cf506a3914c9bbe41dcc3275dbafeaf93a2bfda389ee7406f8f4c namespace=k8s.io
Jan 27 12:56:41 no-preload-215237 containerd[562]: time="2025-01-27T12:56:41.543542819Z" level=warning msg="cleaning up after shim disconnected" id=3ceaaa73498cf506a3914c9bbe41dcc3275dbafeaf93a2bfda389ee7406f8f4c namespace=k8s.io
Jan 27 12:56:41 no-preload-215237 containerd[562]: time="2025-01-27T12:56:41.543620521Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 27 12:56:42 no-preload-215237 containerd[562]: time="2025-01-27T12:56:42.486662703Z" level=info msg="RemoveContainer for \"ac1c43ff6b1b0874b35d10d81d6bf1abcb2072868dc4b0513eeb5239680c4383\""
Jan 27 12:56:42 no-preload-215237 containerd[562]: time="2025-01-27T12:56:42.494590344Z" level=info msg="RemoveContainer for \"ac1c43ff6b1b0874b35d10d81d6bf1abcb2072868dc4b0513eeb5239680c4383\" returns successfully"
Jan 27 12:57:02 no-preload-215237 containerd[562]: time="2025-01-27T12:57:02.409235664Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 27 12:57:02 no-preload-215237 containerd[562]: time="2025-01-27T12:57:02.418944317Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
Jan 27 12:57:02 no-preload-215237 containerd[562]: time="2025-01-27T12:57:02.420533982Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
Jan 27 12:57:02 no-preload-215237 containerd[562]: time="2025-01-27T12:57:02.420593937Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
==> coredns [6a071a9d5905bd462eb5828e287847c360395e9bdf44b10604521331ed76dc38] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
CoreDNS-1.11.3
linux/amd64, go1.21.11, a6338e9
==> coredns [e204bce6ab533b2ab5f3991efb9bf4c39b985dfdfcda79400757ae9cc2b16401] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
CoreDNS-1.11.3
linux/amd64, go1.21.11, a6338e9
==> describe nodes <==
Name: no-preload-215237
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=no-preload-215237
kubernetes.io/os=linux
minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650
minikube.k8s.io/name=no-preload-215237
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_01_27T12_35_36_0700
minikube.k8s.io/version=v1.35.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 27 Jan 2025 12:35:32 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: no-preload-215237
AcquireTime: <unset>
RenewTime: Mon, 27 Jan 2025 12:57:12 +0000
Conditions:
Type            Status  LastHeartbeatTime                LastTransitionTime               Reason                      Message
----            ------  -----------------                ------------------               ------                      -------
MemoryPressure  False   Mon, 27 Jan 2025 12:55:29 +0000  Mon, 27 Jan 2025 12:35:31 +0000  KubeletHasSufficientMemory  kubelet has sufficient memory available
DiskPressure    False   Mon, 27 Jan 2025 12:55:29 +0000  Mon, 27 Jan 2025 12:35:31 +0000  KubeletHasNoDiskPressure    kubelet has no disk pressure
PIDPressure     False   Mon, 27 Jan 2025 12:55:29 +0000  Mon, 27 Jan 2025 12:35:31 +0000  KubeletHasSufficientPID     kubelet has sufficient PID available
Ready           True    Mon, 27 Jan 2025 12:55:29 +0000  Mon, 27 Jan 2025 12:35:33 +0000  KubeletReady                kubelet is posting ready status
Addresses:
InternalIP: 192.168.72.127
Hostname: no-preload-215237
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 2164184Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 2164184Ki
pods: 110
System Info:
Machine ID: 9ae0e5191349457197e5e70ea74d2584
System UUID: 9ae0e519-1349-4571-97e5-e70ea74d2584
Boot ID: 87718bc9-62ae-4833-b9af-6d0031a85e3e
Kernel Version: 5.10.207
OS Image: Buildroot 2023.02.9
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.7.23
Kubelet Version: v1.32.1
Kube-Proxy Version: v1.32.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (11 in total)
Namespace             Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
---------             ----                                        ------------  ----------  ---------------  -------------  ---
kube-system           coredns-668d6bf9bc-v9stn                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
kube-system           coredns-668d6bf9bc-wwb9p                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
kube-system           etcd-no-preload-215237                      100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
kube-system           kube-apiserver-no-preload-215237            250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
kube-system           kube-controller-manager-no-preload-215237   200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
kube-system           kube-proxy-bbnm2                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
kube-system           kube-scheduler-no-preload-215237            100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
kube-system           metrics-server-f79f97bbb-lqck5              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
kube-system           storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
kubernetes-dashboard  dashboard-metrics-scraper-86c6bf9756-kd8j9  0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
kubernetes-dashboard  kubernetes-dashboard-7779f9b69b-4vdvf       0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource           Requests     Limits
--------           --------     ------
cpu                950m (47%)   0 (0%)
memory             440Mi (20%)  340Mi (16%)
ephemeral-storage  0 (0%)       0 (0%)
hugepages-2Mi      0 (0%)       0 (0%)
Events:
Type    Reason                   Age                From             Message
----    ------                   ----               ----             -------
Normal  Starting                 21m                kube-proxy
Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node no-preload-215237 status is now: NodeHasSufficientMemory
Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node no-preload-215237 status is now: NodeHasNoDiskPressure
Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node no-preload-215237 status is now: NodeHasSufficientPID
Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
Normal  Starting                 21m                kubelet          Starting kubelet.
Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
Normal  NodeHasSufficientMemory  21m                kubelet          Node no-preload-215237 status is now: NodeHasSufficientMemory
Normal  NodeHasNoDiskPressure    21m                kubelet          Node no-preload-215237 status is now: NodeHasNoDiskPressure
Normal  NodeHasSufficientPID     21m                kubelet          Node no-preload-215237 status is now: NodeHasSufficientPID
Normal  RegisteredNode           21m                node-controller  Node no-preload-215237 event: Registered Node no-preload-215237 in Controller
==> dmesg <==
[ +0.038094] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +4.819415] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +2.019120] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
[ +1.558800] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
[Jan27 12:31] systemd-fstab-generator[484]: Ignoring "noauto" option for root device
[ +0.058765] kauditd_printk_skb: 1 callbacks suppressed
[ +0.046408] systemd-fstab-generator[496]: Ignoring "noauto" option for root device
[ +0.155095] systemd-fstab-generator[510]: Ignoring "noauto" option for root device
[ +0.139464] systemd-fstab-generator[522]: Ignoring "noauto" option for root device
[ +0.280508] systemd-fstab-generator[553]: Ignoring "noauto" option for root device
[ +1.713990] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
[ +1.806706] systemd-fstab-generator[754]: Ignoring "noauto" option for root device
[ +0.853658] kauditd_printk_skb: 225 callbacks suppressed
[ +5.037897] kauditd_printk_skb: 50 callbacks suppressed
[ +11.338835] kauditd_printk_skb: 71 callbacks suppressed
[Jan27 12:35] systemd-fstab-generator[3024]: Ignoring "noauto" option for root device
[ +6.061721] systemd-fstab-generator[3397]: Ignoring "noauto" option for root device
[ +0.105541] kauditd_printk_skb: 87 callbacks suppressed
[ +5.139082] kauditd_printk_skb: 12 callbacks suppressed
[ +0.304385] systemd-fstab-generator[3575]: Ignoring "noauto" option for root device
[ +5.315069] kauditd_printk_skb: 112 callbacks suppressed
[ +7.866581] kauditd_printk_skb: 1 callbacks suppressed
[Jan27 12:36] kauditd_printk_skb: 4 callbacks suppressed
==> etcd [b3e3a512c59dcf9411744c4bac1c26316107acd555b51dc8f450d5bb4237410d] <==
{"level":"info","ts":"2025-01-27T12:35:31.070363Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2025-01-27T12:35:31.071217Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2025-01-27T12:35:31.076095Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2025-01-27T12:35:31.084228Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.127:2379"}
{"level":"info","ts":"2025-01-27T12:35:31.088943Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2025-01-27T12:35:31.080713Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2025-01-27T12:35:31.089415Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2025-01-27T12:35:31.083432Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2b0928dff5fc0b2","local-member-id":"aed9602068d4a4e0","cluster-version":"3.5"}
{"level":"info","ts":"2025-01-27T12:35:31.089992Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2025-01-27T12:35:31.092430Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2025-01-27T12:35:31.097719Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2025-01-27T12:35:47.689584Z","caller":"traceutil/trace.go:171","msg":"trace[1288856585] transaction","detail":"{read_only:false; response_revision:519; number_of_response:1; }","duration":"123.552333ms","start":"2025-01-27T12:35:47.565958Z","end":"2025-01-27T12:35:47.689510Z","steps":["trace[1288856585] 'process raft request' (duration: 123.456666ms)"],"step_count":1}
{"level":"info","ts":"2025-01-27T12:35:49.305994Z","caller":"traceutil/trace.go:171","msg":"trace[1251088634] transaction","detail":"{read_only:false; response_revision:524; number_of_response:1; }","duration":"100.461726ms","start":"2025-01-27T12:35:49.205505Z","end":"2025-01-27T12:35:49.305967Z","steps":["trace[1251088634] 'process raft request' (duration: 99.751683ms)"],"step_count":1}
{"level":"info","ts":"2025-01-27T12:35:53.452191Z","caller":"traceutil/trace.go:171","msg":"trace[1851918262] transaction","detail":"{read_only:false; response_revision:536; number_of_response:1; }","duration":"114.283914ms","start":"2025-01-27T12:35:53.337879Z","end":"2025-01-27T12:35:53.452163Z","steps":["trace[1851918262] 'process raft request' (duration: 113.060063ms)"],"step_count":1}
{"level":"info","ts":"2025-01-27T12:36:32.874740Z","caller":"traceutil/trace.go:171","msg":"trace[557556585] transaction","detail":"{read_only:false; response_revision:604; number_of_response:1; }","duration":"136.24693ms","start":"2025-01-27T12:36:32.738464Z","end":"2025-01-27T12:36:32.874711Z","steps":["trace[557556585] 'process raft request' (duration: 136.053337ms)"],"step_count":1}
{"level":"info","ts":"2025-01-27T12:36:32.879496Z","caller":"traceutil/trace.go:171","msg":"trace[1453627606] transaction","detail":"{read_only:false; response_revision:605; number_of_response:1; }","duration":"132.561219ms","start":"2025-01-27T12:36:32.746916Z","end":"2025-01-27T12:36:32.879478Z","steps":["trace[1453627606] 'process raft request' (duration: 132.021847ms)"],"step_count":1}
{"level":"info","ts":"2025-01-27T12:45:31.447034Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":869}
{"level":"info","ts":"2025-01-27T12:45:31.488475Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":869,"took":"40.382605ms","hash":2608631459,"current-db-size-bytes":2920448,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":2920448,"current-db-size-in-use":"2.9 MB"}
{"level":"info","ts":"2025-01-27T12:45:31.488704Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2608631459,"revision":869,"compact-revision":-1}
{"level":"info","ts":"2025-01-27T12:50:31.454455Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1120}
{"level":"info","ts":"2025-01-27T12:50:31.459411Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1120,"took":"4.280134ms","hash":1951365307,"current-db-size-bytes":2920448,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":1728512,"current-db-size-in-use":"1.7 MB"}
{"level":"info","ts":"2025-01-27T12:50:31.459474Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":1951365307,"revision":1120,"compact-revision":869}
{"level":"info","ts":"2025-01-27T12:55:31.464047Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1372}
{"level":"info","ts":"2025-01-27T12:55:31.469871Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1372,"took":"4.593062ms","hash":4111361700,"current-db-size-bytes":2920448,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":1843200,"current-db-size-in-use":"1.8 MB"}
{"level":"info","ts":"2025-01-27T12:55:31.469971Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":4111361700,"revision":1372,"compact-revision":1120}
==> kernel <==
12:57:15 up 26 min, 0 users, load average: 0.09, 0.11, 0.09
Linux no-preload-215237 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2023.02.9"
==> kube-apiserver [95aa57ca824e97918bf0d2b243865c20f33b1c15de12407fc1b20ba49b450296] <==
I0127 12:53:33.932968 1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0127 12:53:33.933021 1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
W0127 12:55:32.931531 1 handler_proxy.go:99] no RequestInfo found in the context
E0127 12:55:32.931762 1 controller.go:146] "Unhandled Error" err=<
Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
W0127 12:55:33.933996 1 handler_proxy.go:99] no RequestInfo found in the context
E0127 12:55:33.934154 1 controller.go:102] "Unhandled Error" err=<
loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
W0127 12:55:33.934040 1 handler_proxy.go:99] no RequestInfo found in the context
E0127 12:55:33.934294 1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
I0127 12:55:33.935596 1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0127 12:55:33.935632 1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
W0127 12:56:33.936704 1 handler_proxy.go:99] no RequestInfo found in the context
E0127 12:56:33.937084 1 controller.go:102] "Unhandled Error" err=<
loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
W0127 12:56:33.937253 1 handler_proxy.go:99] no RequestInfo found in the context
E0127 12:56:33.937323 1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
I0127 12:56:33.938298 1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0127 12:56:33.938502 1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
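(The 503s above repeat on the controller's requeue interval because the aggregated v1beta1.metrics.k8s.io APIService is registered but its backing metrics-server Service never becomes ready; the kubelet log further below shows why the pod never starts. A minimal way to confirm this state, using only names visible in this log:

  kubectl --context no-preload-215237 get apiservice v1beta1.metrics.k8s.io    # expect Available=False
  kubectl --context no-preload-215237 -n kube-system get endpoints metrics-server    # expect no ready addresses
)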
==> kube-controller-manager [41ac70a4bacec6104f094ac80a9205904f6f390bee466b2a7f3baa56d349f7ff] <==
I0127 12:52:09.747484 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
I0127 12:52:14.422176 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="136.591µs"
E0127 12:52:39.699955 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 12:52:39.754247 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0127 12:53:09.706535 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 12:53:09.761805 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0127 12:53:39.713157 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 12:53:39.771001 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0127 12:54:09.720146 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 12:54:09.778901 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0127 12:54:39.727498 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 12:54:39.786482 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0127 12:55:09.734966 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 12:55:09.795665 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
I0127 12:55:29.263823 1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="no-preload-215237"
E0127 12:55:39.741376 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 12:55:39.802752 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0127 12:56:09.748042 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 12:56:09.809143 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0127 12:56:39.755134 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 12:56:39.816685 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
I0127 12:56:42.504601 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="1.171328ms"
I0127 12:56:48.049166 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="85.901µs"
E0127 12:57:09.761583 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 12:57:09.823165 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
==> kube-proxy [22d83b17aba0d308213867c1019db87a7dcd2fb74c0992663a062867e498094b] <==
add table ip kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^
>
E0127 12:35:40.803179 1 proxier.go:733] "Error cleaning up nftables rules" err=<
could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
add table ip6 kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^^
>
I0127 12:35:40.815043 1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.72.127"]
E0127 12:35:40.815116 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0127 12:35:40.912808 1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
I0127 12:35:40.912846 1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0127 12:35:40.912867 1 server_linux.go:170] "Using iptables Proxier"
I0127 12:35:40.916261 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0127 12:35:40.916860 1 server.go:497] "Version info" version="v1.32.1"
I0127 12:35:40.916894 1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0127 12:35:40.918730 1 config.go:199] "Starting service config controller"
I0127 12:35:40.918778 1 shared_informer.go:313] Waiting for caches to sync for service config
I0127 12:35:40.918801 1 config.go:105] "Starting endpoint slice config controller"
I0127 12:35:40.918819 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0127 12:35:40.923786 1 config.go:329] "Starting node config controller"
I0127 12:35:40.923799 1 shared_informer.go:313] Waiting for caches to sync for node config
I0127 12:35:41.019353 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0127 12:35:41.019396 1 shared_informer.go:320] Caches are synced for service config
I0127 12:35:41.026373 1 shared_informer.go:320] Caches are synced for node config
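(The "Operation not supported" nftables errors above come from cleanup attempts on a guest kernel without nftables rule support; kube-proxy then falls back to the iptables proxier, as the "Using iptables Proxier" line confirms. A sketch of how one could verify the fallback from inside the VM, assuming the profile name from this run:

  minikube -p no-preload-215237 ssh -- sudo iptables-save | grep KUBE-SVC
)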
==> kube-scheduler [da65aa22e920d0c8384e67aacfe137551310ab4661c3159a4babe77fa7cdacf3] <==
W0127 12:35:33.849575 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0127 12:35:33.849640 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0127 12:35:33.866527 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0127 12:35:33.866834 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0127 12:35:33.889220 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0127 12:35:33.889531 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0127 12:35:33.899656 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0127 12:35:33.900089 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0127 12:35:33.910481 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0127 12:35:33.910523 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0127 12:35:33.924056 1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0127 12:35:33.924147 1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
W0127 12:35:34.082107 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
E0127 12:35:34.082171 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0127 12:35:34.086192 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0127 12:35:34.086245 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0127 12:35:34.128167 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0127 12:35:34.128677 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0127 12:35:34.177503 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0127 12:35:34.177572 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0127 12:35:34.272348 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0127 12:35:34.272421 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
W0127 12:35:34.289501 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0127 12:35:34.289553 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
I0127 12:35:35.921663 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Jan 27 12:56:12 no-preload-215237 kubelet[3404]: E0127 12:56:12.408710 3404 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-kd8j9_kubernetes-dashboard(4bea5aec-3ec2-4ad9-b985-19376000e8b9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-kd8j9" podUID="4bea5aec-3ec2-4ad9-b985-19376000e8b9"
Jan 27 12:56:19 no-preload-215237 kubelet[3404]: E0127 12:56:19.411446 3404 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-lqck5" podUID="3447c2da-cbb0-412c-a8d9-2be32c8e6dad"
Jan 27 12:56:26 no-preload-215237 kubelet[3404]: I0127 12:56:26.407996 3404 scope.go:117] "RemoveContainer" containerID="ac1c43ff6b1b0874b35d10d81d6bf1abcb2072868dc4b0513eeb5239680c4383"
Jan 27 12:56:26 no-preload-215237 kubelet[3404]: E0127 12:56:26.408635 3404 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-kd8j9_kubernetes-dashboard(4bea5aec-3ec2-4ad9-b985-19376000e8b9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-kd8j9" podUID="4bea5aec-3ec2-4ad9-b985-19376000e8b9"
Jan 27 12:56:34 no-preload-215237 kubelet[3404]: E0127 12:56:34.409531 3404 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-lqck5" podUID="3447c2da-cbb0-412c-a8d9-2be32c8e6dad"
Jan 27 12:56:35 no-preload-215237 kubelet[3404]: E0127 12:56:35.427562 3404 iptables.go:577] "Could not set up iptables canary" err=<
Jan 27 12:56:35 no-preload-215237 kubelet[3404]: error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
Jan 27 12:56:35 no-preload-215237 kubelet[3404]: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Jan 27 12:56:35 no-preload-215237 kubelet[3404]: Perhaps ip6tables or your kernel needs to be upgraded.
Jan 27 12:56:35 no-preload-215237 kubelet[3404]: > table="nat" chain="KUBE-KUBELET-CANARY"
Jan 27 12:56:41 no-preload-215237 kubelet[3404]: I0127 12:56:41.408787 3404 scope.go:117] "RemoveContainer" containerID="ac1c43ff6b1b0874b35d10d81d6bf1abcb2072868dc4b0513eeb5239680c4383"
Jan 27 12:56:42 no-preload-215237 kubelet[3404]: I0127 12:56:42.483701 3404 scope.go:117] "RemoveContainer" containerID="ac1c43ff6b1b0874b35d10d81d6bf1abcb2072868dc4b0513eeb5239680c4383"
Jan 27 12:56:42 no-preload-215237 kubelet[3404]: I0127 12:56:42.484368 3404 scope.go:117] "RemoveContainer" containerID="3ceaaa73498cf506a3914c9bbe41dcc3275dbafeaf93a2bfda389ee7406f8f4c"
Jan 27 12:56:42 no-preload-215237 kubelet[3404]: E0127 12:56:42.484596 3404 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-kd8j9_kubernetes-dashboard(4bea5aec-3ec2-4ad9-b985-19376000e8b9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-kd8j9" podUID="4bea5aec-3ec2-4ad9-b985-19376000e8b9"
Jan 27 12:56:48 no-preload-215237 kubelet[3404]: I0127 12:56:48.031649 3404 scope.go:117] "RemoveContainer" containerID="3ceaaa73498cf506a3914c9bbe41dcc3275dbafeaf93a2bfda389ee7406f8f4c"
Jan 27 12:56:48 no-preload-215237 kubelet[3404]: E0127 12:56:48.031952 3404 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-kd8j9_kubernetes-dashboard(4bea5aec-3ec2-4ad9-b985-19376000e8b9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-kd8j9" podUID="4bea5aec-3ec2-4ad9-b985-19376000e8b9"
Jan 27 12:56:48 no-preload-215237 kubelet[3404]: E0127 12:56:48.409021 3404 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-lqck5" podUID="3447c2da-cbb0-412c-a8d9-2be32c8e6dad"
Jan 27 12:57:00 no-preload-215237 kubelet[3404]: I0127 12:57:00.408211 3404 scope.go:117] "RemoveContainer" containerID="3ceaaa73498cf506a3914c9bbe41dcc3275dbafeaf93a2bfda389ee7406f8f4c"
Jan 27 12:57:00 no-preload-215237 kubelet[3404]: E0127 12:57:00.408580 3404 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-kd8j9_kubernetes-dashboard(4bea5aec-3ec2-4ad9-b985-19376000e8b9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-kd8j9" podUID="4bea5aec-3ec2-4ad9-b985-19376000e8b9"
Jan 27 12:57:02 no-preload-215237 kubelet[3404]: E0127 12:57:02.420922 3404 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
Jan 27 12:57:02 no-preload-215237 kubelet[3404]: E0127 12:57:02.421390 3404 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
Jan 27 12:57:02 no-preload-215237 kubelet[3404]: E0127 12:57:02.421771 3404 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nrcx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-f79f97bbb-lqck5_kube-system(3447c2da-cbb0-412c-a8d9-2be32c8e6dad): ErrImagePull: failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
Jan 27 12:57:02 no-preload-215237 kubelet[3404]: E0127 12:57:02.423194 3404 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-lqck5" podUID="3447c2da-cbb0-412c-a8d9-2be32c8e6dad"
Jan 27 12:57:14 no-preload-215237 kubelet[3404]: I0127 12:57:14.407961 3404 scope.go:117] "RemoveContainer" containerID="3ceaaa73498cf506a3914c9bbe41dcc3275dbafeaf93a2bfda389ee7406f8f4c"
Jan 27 12:57:14 no-preload-215237 kubelet[3404]: E0127 12:57:14.408592 3404 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-kd8j9_kubernetes-dashboard(4bea5aec-3ec2-4ad9-b985-19376000e8b9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-kd8j9" podUID="4bea5aec-3ec2-4ad9-b985-19376000e8b9"
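(The ErrImagePull/ImagePullBackOff loop above is for fake.domain/registry.k8s.io/echoserver:1.4, an unresolvable registry host, presumably this test's stand-in image for metrics-server, so the pull can never succeed and the pod stays non-running. To see the offending image reference directly, a sketch assuming the usual metrics-server Deployment behind the metrics-server-f79f97bbb ReplicaSet seen above:

  kubectl --context no-preload-215237 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
)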
==> kubernetes-dashboard [4b65326b3a3c311cd62ce540884e41956b6fbed40d4755dbbf0bff3c4de481fd] <==
2025/01/27 12:44:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 12:45:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 12:45:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 12:46:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 12:46:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 12:47:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 12:47:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 12:48:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 12:48:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 12:49:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 12:49:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 12:50:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 12:50:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 12:51:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 12:51:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 12:52:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 12:52:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 12:53:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 12:53:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 12:54:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 12:54:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 12:55:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 12:55:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 12:56:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 12:56:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
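(The dashboard's metric client retries every 30 seconds and keeps getting a 503, consistent with the dashboard-metrics-scraper container sitting in CrashLoopBackOff per the kubelet log above, which leaves its Service with no ready endpoints. One way to inspect it, using only names from this log:

  kubectl --context no-preload-215237 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-86c6bf9756-kd8j9
)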
==> storage-provisioner [dc2d31b650f7fa67ecb57ffc495e5d3fe523cd58f39ed357acc14aed652476d0] <==
I0127 12:35:43.036174 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0127 12:35:43.097982 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0127 12:35:43.098250 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0127 12:35:43.129850 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0127 12:35:43.130155 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-215237_073fd1a8-0a3b-49a1-b9f9-7c7d9f226e85!
I0127 12:35:43.131603 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"58db0c48-2dc1-4940-89af-b87e3848859b", APIVersion:"v1", ResourceVersion:"489", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-215237_073fd1a8-0a3b-49a1-b9f9-7c7d9f226e85 became leader
I0127 12:35:43.231514 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-215237_073fd1a8-0a3b-49a1-b9f9-7c7d9f226e85!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-215237 -n no-preload-215237
helpers_test.go:261: (dbg) Run: kubectl --context no-preload-215237 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-lqck5
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context no-preload-215237 describe pod metrics-server-f79f97bbb-lqck5
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-215237 describe pod metrics-server-f79f97bbb-lqck5: exit status 1 (61.464991ms)
** stderr **
Error from server (NotFound): pods "metrics-server-f79f97bbb-lqck5" not found
** /stderr **
helpers_test.go:279: kubectl --context no-preload-215237 describe pod metrics-server-f79f97bbb-lqck5: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (1596.02s)