=== RUN TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run: out/minikube-linux-amd64 start -p no-preload-325431 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.32.1
E0127 13:23:39.253012 474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/addons-728052/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:23:41.820414 474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/kindnet-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:23:48.071421 474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/calico-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:23:48.077802 474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/calico-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:23:48.089251 474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/calico-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:23:48.110738 474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/calico-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:23:48.152202 474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/calico-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:23:48.233788 474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/calico-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:23:48.395389 474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/calico-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:23:48.716697 474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/calico-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:23:49.358533 474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/calico-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:23:50.640652 474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/calico-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:23:53.202827 474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/calico-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:23:58.324550 474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/calico-744060/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:24:08.566619 474275 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/calico-744060/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p no-preload-325431 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.32.1: signal: killed (27m0.722930283s)
-- stdout --
* [no-preload-325431] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=20317
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/20317-466901/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-466901/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the kvm2 driver based on existing profile
* Starting "no-preload-325431" primary control-plane node in "no-preload-325431" cluster
* Restarting existing kvm2 VM for "no-preload-325431" ...
* Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
* Configuring bridge CNI (Container Networking Interface) ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image docker.io/kubernetesui/dashboard:v2.7.0
- Using image registry.k8s.io/echoserver:1.4
- Using image fake.domain/registry.k8s.io/echoserver:1.4
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p no-preload-325431 addons enable metrics-server
* Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
-- /stdout --
** stderr **
I0127 13:23:32.645876 528954 out.go:345] Setting OutFile to fd 1 ...
I0127 13:23:32.645988 528954 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 13:23:32.645996 528954 out.go:358] Setting ErrFile to fd 2...
I0127 13:23:32.646000 528954 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 13:23:32.646190 528954 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-466901/.minikube/bin
I0127 13:23:32.646741 528954 out.go:352] Setting JSON to false
I0127 13:23:32.647782 528954 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":36310,"bootTime":1737947903,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0127 13:23:32.647910 528954 start.go:139] virtualization: kvm guest
I0127 13:23:32.649979 528954 out.go:177] * [no-preload-325431] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
I0127 13:23:32.651448 528954 notify.go:220] Checking for updates...
I0127 13:23:32.651473 528954 out.go:177] - MINIKUBE_LOCATION=20317
I0127 13:23:32.652842 528954 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0127 13:23:32.654268 528954 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20317-466901/kubeconfig
I0127 13:23:32.655537 528954 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-466901/.minikube
I0127 13:23:32.656759 528954 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0127 13:23:32.658425 528954 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0127 13:23:32.659954 528954 config.go:182] Loaded profile config "no-preload-325431": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 13:23:32.660327 528954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:23:32.660378 528954 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:23:32.675724 528954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37269
I0127 13:23:32.676252 528954 main.go:141] libmachine: () Calling .GetVersion
I0127 13:23:32.676865 528954 main.go:141] libmachine: Using API Version 1
I0127 13:23:32.676893 528954 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:23:32.677259 528954 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:23:32.677474 528954 main.go:141] libmachine: (no-preload-325431) Calling .DriverName
I0127 13:23:32.677782 528954 driver.go:394] Setting default libvirt URI to qemu:///system
I0127 13:23:32.678237 528954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:23:32.678291 528954 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:23:32.693444 528954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42475
I0127 13:23:32.693854 528954 main.go:141] libmachine: () Calling .GetVersion
I0127 13:23:32.694326 528954 main.go:141] libmachine: Using API Version 1
I0127 13:23:32.694352 528954 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:23:32.694639 528954 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:23:32.694840 528954 main.go:141] libmachine: (no-preload-325431) Calling .DriverName
I0127 13:23:32.732796 528954 out.go:177] * Using the kvm2 driver based on existing profile
I0127 13:23:32.733939 528954 start.go:297] selected driver: kvm2
I0127 13:23:32.733954 528954 start.go:901] validating driver "kvm2" against &{Name:no-preload-325431 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-325431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.116 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0127 13:23:32.734098 528954 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0127 13:23:32.734776 528954 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 13:23:32.734884 528954 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20317-466901/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0127 13:23:32.750482 528954 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
I0127 13:23:32.751028 528954 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0127 13:23:32.751081 528954 cni.go:84] Creating CNI manager for ""
I0127 13:23:32.751165 528954 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0127 13:23:32.751218 528954 start.go:340] cluster config:
{Name:no-preload-325431 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-325431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.116 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0127 13:23:32.751414 528954 iso.go:125] acquiring lock: {Name:mkcc3db98c9d4661e75c49bd9b203d0232dff8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 13:23:32.754267 528954 out.go:177] * Starting "no-preload-325431" primary control-plane node in "no-preload-325431" cluster
I0127 13:23:32.755613 528954 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
I0127 13:23:32.755730 528954 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/no-preload-325431/config.json ...
I0127 13:23:32.755878 528954 cache.go:107] acquiring lock: {Name:mk0425a032ced4bdea57fd149bd1003ccc819b8e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 13:23:32.755874 528954 cache.go:107] acquiring lock: {Name:mkf1e2d7a48534619b32d5198ef9090e83eaab37 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 13:23:32.755954 528954 cache.go:107] acquiring lock: {Name:mk39b81bdcfa7d1829955b77cfed02c1a3ca582a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 13:23:32.755981 528954 start.go:360] acquireMachinesLock for no-preload-325431: {Name:mke115b779db52cb0a5f0a05f83d5bad0a35c561 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0127 13:23:32.755957 528954 cache.go:107] acquiring lock: {Name:mk7cd8ee4a354ebea291b7a031d037adad6f4eab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 13:23:32.756005 528954 cache.go:115] /home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
I0127 13:23:32.755996 528954 cache.go:107] acquiring lock: {Name:mk79d5d01647144335c1aa4441c0442e89aa5919 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 13:23:32.756016 528954 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 153.625µs
I0127 13:23:32.756043 528954 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
I0127 13:23:32.756020 528954 cache.go:107] acquiring lock: {Name:mkd93d04192eff91f8bfaec9535df9aa96f61b81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 13:23:32.756051 528954 start.go:364] duration metric: took 45.81µs to acquireMachinesLock for "no-preload-325431"
I0127 13:23:32.756059 528954 cache.go:115] /home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
I0127 13:23:32.756075 528954 start.go:96] Skipping create...Using existing machine configuration
I0127 13:23:32.756044 528954 cache.go:107] acquiring lock: {Name:mk47162761e1a477778394895affb07de499ad0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 13:23:32.756083 528954 fix.go:54] fixHost starting:
I0127 13:23:32.756077 528954 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 124.067µs
I0127 13:23:32.756124 528954 cache.go:115] /home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 exists
I0127 13:23:32.756130 528954 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
I0127 13:23:32.756051 528954 cache.go:115] /home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 exists
I0127 13:23:32.756160 528954 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.32.1" -> "/home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1" took 163.897µs
I0127 13:23:32.756176 528954 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.32.1 -> /home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 succeeded
I0127 13:23:32.756157 528954 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.32.1" -> "/home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1" took 278.711µs
I0127 13:23:32.756184 528954 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.32.1 -> /home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 succeeded
I0127 13:23:32.756086 528954 cache.go:107] acquiring lock: {Name:mk56c8495b1b67a68bdb2cfb60d162b3dad1956a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 13:23:32.756185 528954 cache.go:115] /home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 exists
I0127 13:23:32.756203 528954 cache.go:115] /home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 exists
I0127 13:23:32.756225 528954 cache.go:115] /home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
I0127 13:23:32.756208 528954 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.32.1" -> "/home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1" took 221.537µs
I0127 13:23:32.756223 528954 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.32.1" -> "/home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1" took 225.392µs
I0127 13:23:32.756252 528954 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.32.1 -> /home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 succeeded
I0127 13:23:32.756238 528954 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.32.1 -> /home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 succeeded
I0127 13:23:32.756047 528954 cache.go:115] /home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 exists
I0127 13:23:32.756274 528954 cache.go:96] cache image "registry.k8s.io/etcd:3.5.16-0" -> "/home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0" took 333.138µs
I0127 13:23:32.756284 528954 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.16-0 -> /home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 succeeded
I0127 13:23:32.756237 528954 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 218.675µs
I0127 13:23:32.756297 528954 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20317-466901/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
I0127 13:23:32.756308 528954 cache.go:87] Successfully saved all images to host disk.
I0127 13:23:32.756447 528954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:23:32.756496 528954 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:23:32.771438 528954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33331
I0127 13:23:32.771868 528954 main.go:141] libmachine: () Calling .GetVersion
I0127 13:23:32.772390 528954 main.go:141] libmachine: Using API Version 1
I0127 13:23:32.772412 528954 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:23:32.772771 528954 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:23:32.772983 528954 main.go:141] libmachine: (no-preload-325431) Calling .DriverName
I0127 13:23:32.773187 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetState
I0127 13:23:32.774651 528954 fix.go:112] recreateIfNeeded on no-preload-325431: state=Stopped err=<nil>
I0127 13:23:32.774678 528954 main.go:141] libmachine: (no-preload-325431) Calling .DriverName
W0127 13:23:32.774840 528954 fix.go:138] unexpected machine state, will restart: <nil>
I0127 13:23:32.776648 528954 out.go:177] * Restarting existing kvm2 VM for "no-preload-325431" ...
I0127 13:23:32.777886 528954 main.go:141] libmachine: (no-preload-325431) Calling .Start
I0127 13:23:32.778081 528954 main.go:141] libmachine: (no-preload-325431) starting domain...
I0127 13:23:32.778104 528954 main.go:141] libmachine: (no-preload-325431) ensuring networks are active...
I0127 13:23:32.778918 528954 main.go:141] libmachine: (no-preload-325431) Ensuring network default is active
I0127 13:23:32.779290 528954 main.go:141] libmachine: (no-preload-325431) Ensuring network mk-no-preload-325431 is active
I0127 13:23:32.779607 528954 main.go:141] libmachine: (no-preload-325431) getting domain XML...
I0127 13:23:32.780385 528954 main.go:141] libmachine: (no-preload-325431) creating domain...
I0127 13:23:34.002987 528954 main.go:141] libmachine: (no-preload-325431) waiting for IP...
I0127 13:23:34.003812 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:23:34.004345 528954 main.go:141] libmachine: (no-preload-325431) DBG | unable to find current IP address of domain no-preload-325431 in network mk-no-preload-325431
I0127 13:23:34.004417 528954 main.go:141] libmachine: (no-preload-325431) DBG | I0127 13:23:34.004308 528989 retry.go:31] will retry after 305.177483ms: waiting for domain to come up
I0127 13:23:34.310911 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:23:34.311468 528954 main.go:141] libmachine: (no-preload-325431) DBG | unable to find current IP address of domain no-preload-325431 in network mk-no-preload-325431
I0127 13:23:34.311494 528954 main.go:141] libmachine: (no-preload-325431) DBG | I0127 13:23:34.311430 528989 retry.go:31] will retry after 235.274048ms: waiting for domain to come up
I0127 13:23:34.547991 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:23:34.548548 528954 main.go:141] libmachine: (no-preload-325431) DBG | unable to find current IP address of domain no-preload-325431 in network mk-no-preload-325431
I0127 13:23:34.548572 528954 main.go:141] libmachine: (no-preload-325431) DBG | I0127 13:23:34.548525 528989 retry.go:31] will retry after 476.26083ms: waiting for domain to come up
I0127 13:23:35.026210 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:23:35.026783 528954 main.go:141] libmachine: (no-preload-325431) DBG | unable to find current IP address of domain no-preload-325431 in network mk-no-preload-325431
I0127 13:23:35.026842 528954 main.go:141] libmachine: (no-preload-325431) DBG | I0127 13:23:35.026737 528989 retry.go:31] will retry after 396.169606ms: waiting for domain to come up
I0127 13:23:35.424533 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:23:35.425057 528954 main.go:141] libmachine: (no-preload-325431) DBG | unable to find current IP address of domain no-preload-325431 in network mk-no-preload-325431
I0127 13:23:35.425090 528954 main.go:141] libmachine: (no-preload-325431) DBG | I0127 13:23:35.425012 528989 retry.go:31] will retry after 661.148493ms: waiting for domain to come up
I0127 13:23:36.087979 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:23:36.088470 528954 main.go:141] libmachine: (no-preload-325431) DBG | unable to find current IP address of domain no-preload-325431 in network mk-no-preload-325431
I0127 13:23:36.088531 528954 main.go:141] libmachine: (no-preload-325431) DBG | I0127 13:23:36.088422 528989 retry.go:31] will retry after 869.822406ms: waiting for domain to come up
I0127 13:23:36.959478 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:23:36.959960 528954 main.go:141] libmachine: (no-preload-325431) DBG | unable to find current IP address of domain no-preload-325431 in network mk-no-preload-325431
I0127 13:23:36.959992 528954 main.go:141] libmachine: (no-preload-325431) DBG | I0127 13:23:36.959884 528989 retry.go:31] will retry after 1.015846086s: waiting for domain to come up
I0127 13:23:37.976977 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:23:37.977586 528954 main.go:141] libmachine: (no-preload-325431) DBG | unable to find current IP address of domain no-preload-325431 in network mk-no-preload-325431
I0127 13:23:37.977613 528954 main.go:141] libmachine: (no-preload-325431) DBG | I0127 13:23:37.977563 528989 retry.go:31] will retry after 1.224150031s: waiting for domain to come up
I0127 13:23:39.204085 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:23:39.204606 528954 main.go:141] libmachine: (no-preload-325431) DBG | unable to find current IP address of domain no-preload-325431 in network mk-no-preload-325431
I0127 13:23:39.204630 528954 main.go:141] libmachine: (no-preload-325431) DBG | I0127 13:23:39.204582 528989 retry.go:31] will retry after 1.126383211s: waiting for domain to come up
I0127 13:23:40.333113 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:23:40.333646 528954 main.go:141] libmachine: (no-preload-325431) DBG | unable to find current IP address of domain no-preload-325431 in network mk-no-preload-325431
I0127 13:23:40.333676 528954 main.go:141] libmachine: (no-preload-325431) DBG | I0127 13:23:40.333606 528989 retry.go:31] will retry after 1.430102982s: waiting for domain to come up
I0127 13:23:41.766362 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:23:41.766953 528954 main.go:141] libmachine: (no-preload-325431) DBG | unable to find current IP address of domain no-preload-325431 in network mk-no-preload-325431
I0127 13:23:41.766983 528954 main.go:141] libmachine: (no-preload-325431) DBG | I0127 13:23:41.766915 528989 retry.go:31] will retry after 1.763139948s: waiting for domain to come up
I0127 13:23:43.531472 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:23:43.532056 528954 main.go:141] libmachine: (no-preload-325431) DBG | unable to find current IP address of domain no-preload-325431 in network mk-no-preload-325431
I0127 13:23:43.532087 528954 main.go:141] libmachine: (no-preload-325431) DBG | I0127 13:23:43.532004 528989 retry.go:31] will retry after 3.488533794s: waiting for domain to come up
I0127 13:23:47.024796 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:23:47.025343 528954 main.go:141] libmachine: (no-preload-325431) DBG | unable to find current IP address of domain no-preload-325431 in network mk-no-preload-325431
I0127 13:23:47.025366 528954 main.go:141] libmachine: (no-preload-325431) DBG | I0127 13:23:47.025297 528989 retry.go:31] will retry after 4.076884943s: waiting for domain to come up
I0127 13:23:51.106703 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:23:51.107241 528954 main.go:141] libmachine: (no-preload-325431) found domain IP: 192.168.50.116
I0127 13:23:51.107320 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has current primary IP address 192.168.50.116 and MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:23:51.107334 528954 main.go:141] libmachine: (no-preload-325431) reserving static IP address...
I0127 13:23:51.107783 528954 main.go:141] libmachine: (no-preload-325431) reserved static IP address 192.168.50.116 for domain no-preload-325431
I0127 13:23:51.107836 528954 main.go:141] libmachine: (no-preload-325431) DBG | found host DHCP lease matching {name: "no-preload-325431", mac: "52:54:00:0d:73:1e", ip: "192.168.50.116"} in network mk-no-preload-325431: {Iface:virbr2 ExpiryTime:2025-01-27 14:23:44 +0000 UTC Type:0 Mac:52:54:00:0d:73:1e Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:no-preload-325431 Clientid:01:52:54:00:0d:73:1e}
I0127 13:23:51.107853 528954 main.go:141] libmachine: (no-preload-325431) waiting for SSH...
I0127 13:23:51.107893 528954 main.go:141] libmachine: (no-preload-325431) DBG | skip adding static IP to network mk-no-preload-325431 - found existing host DHCP lease matching {name: "no-preload-325431", mac: "52:54:00:0d:73:1e", ip: "192.168.50.116"}
I0127 13:23:51.107920 528954 main.go:141] libmachine: (no-preload-325431) DBG | Getting to WaitForSSH function...
I0127 13:23:51.109777 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:23:51.110148 528954 main.go:141] libmachine: (no-preload-325431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:73:1e", ip: ""} in network mk-no-preload-325431: {Iface:virbr2 ExpiryTime:2025-01-27 14:23:44 +0000 UTC Type:0 Mac:52:54:00:0d:73:1e Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:no-preload-325431 Clientid:01:52:54:00:0d:73:1e}
I0127 13:23:51.110186 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined IP address 192.168.50.116 and MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:23:51.110304 528954 main.go:141] libmachine: (no-preload-325431) DBG | Using SSH client type: external
I0127 13:23:51.110347 528954 main.go:141] libmachine: (no-preload-325431) DBG | Using SSH private key: /home/jenkins/minikube-integration/20317-466901/.minikube/machines/no-preload-325431/id_rsa (-rw-------)
I0127 13:23:51.110383 528954 main.go:141] libmachine: (no-preload-325431) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20317-466901/.minikube/machines/no-preload-325431/id_rsa -p 22] /usr/bin/ssh <nil>}
I0127 13:23:51.110400 528954 main.go:141] libmachine: (no-preload-325431) DBG | About to run SSH command:
I0127 13:23:51.110405 528954 main.go:141] libmachine: (no-preload-325431) DBG | exit 0
I0127 13:23:51.231743 528954 main.go:141] libmachine: (no-preload-325431) DBG | SSH cmd err, output: <nil>:
I0127 13:23:51.232168 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetConfigRaw
I0127 13:23:51.232942 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetIP
I0127 13:23:51.235364 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:23:51.235732 528954 main.go:141] libmachine: (no-preload-325431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:73:1e", ip: ""} in network mk-no-preload-325431: {Iface:virbr2 ExpiryTime:2025-01-27 14:23:44 +0000 UTC Type:0 Mac:52:54:00:0d:73:1e Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:no-preload-325431 Clientid:01:52:54:00:0d:73:1e}
I0127 13:23:51.235764 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined IP address 192.168.50.116 and MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:23:51.236036 528954 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/no-preload-325431/config.json ...
I0127 13:23:51.236240 528954 machine.go:93] provisionDockerMachine start ...
I0127 13:23:51.236260 528954 main.go:141] libmachine: (no-preload-325431) Calling .DriverName
I0127 13:23:51.236474 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHHostname
I0127 13:23:51.238669 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:23:51.239024 528954 main.go:141] libmachine: (no-preload-325431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:73:1e", ip: ""} in network mk-no-preload-325431: {Iface:virbr2 ExpiryTime:2025-01-27 14:23:44 +0000 UTC Type:0 Mac:52:54:00:0d:73:1e Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:no-preload-325431 Clientid:01:52:54:00:0d:73:1e}
I0127 13:23:51.239046 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined IP address 192.168.50.116 and MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:23:51.239167 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHPort
I0127 13:23:51.239363 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHKeyPath
I0127 13:23:51.239524 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHKeyPath
I0127 13:23:51.239660 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHUsername
I0127 13:23:51.239821 528954 main.go:141] libmachine: Using SSH client type: native
I0127 13:23:51.240084 528954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.50.116 22 <nil> <nil>}
I0127 13:23:51.240101 528954 main.go:141] libmachine: About to run SSH command:
hostname
I0127 13:23:51.339684 528954 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0127 13:23:51.339718 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetMachineName
I0127 13:23:51.339991 528954 buildroot.go:166] provisioning hostname "no-preload-325431"
I0127 13:23:51.340016 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetMachineName
I0127 13:23:51.340239 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHHostname
I0127 13:23:51.342805 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:23:51.343121 528954 main.go:141] libmachine: (no-preload-325431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:73:1e", ip: ""} in network mk-no-preload-325431: {Iface:virbr2 ExpiryTime:2025-01-27 14:23:44 +0000 UTC Type:0 Mac:52:54:00:0d:73:1e Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:no-preload-325431 Clientid:01:52:54:00:0d:73:1e}
I0127 13:23:51.343171 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined IP address 192.168.50.116 and MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:23:51.343322 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHPort
I0127 13:23:51.343528 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHKeyPath
I0127 13:23:51.343679 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHKeyPath
I0127 13:23:51.343796 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHUsername
I0127 13:23:51.343932 528954 main.go:141] libmachine: Using SSH client type: native
I0127 13:23:51.344170 528954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.50.116 22 <nil> <nil>}
I0127 13:23:51.344188 528954 main.go:141] libmachine: About to run SSH command:
sudo hostname no-preload-325431 && echo "no-preload-325431" | sudo tee /etc/hostname
I0127 13:23:51.458207 528954 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-325431
I0127 13:23:51.458243 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHHostname
I0127 13:23:51.460975 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:23:51.461420 528954 main.go:141] libmachine: (no-preload-325431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:73:1e", ip: ""} in network mk-no-preload-325431: {Iface:virbr2 ExpiryTime:2025-01-27 14:23:44 +0000 UTC Type:0 Mac:52:54:00:0d:73:1e Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:no-preload-325431 Clientid:01:52:54:00:0d:73:1e}
I0127 13:23:51.461456 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined IP address 192.168.50.116 and MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:23:51.461633 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHPort
I0127 13:23:51.461847 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHKeyPath
I0127 13:23:51.462003 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHKeyPath
I0127 13:23:51.462134 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHUsername
I0127 13:23:51.462324 528954 main.go:141] libmachine: Using SSH client type: native
I0127 13:23:51.462512 528954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.50.116 22 <nil> <nil>}
I0127 13:23:51.462528 528954 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sno-preload-325431' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-325431/g' /etc/hosts;
else
echo '127.0.1.1 no-preload-325431' | sudo tee -a /etc/hosts;
fi
fi
I0127 13:23:51.573171 528954 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0127 13:23:51.573209 528954 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20317-466901/.minikube CaCertPath:/home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20317-466901/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20317-466901/.minikube}
I0127 13:23:51.573229 528954 buildroot.go:174] setting up certificates
I0127 13:23:51.573242 528954 provision.go:84] configureAuth start
I0127 13:23:51.573250 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetMachineName
I0127 13:23:51.573567 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetIP
I0127 13:23:51.576532 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:23:51.576940 528954 main.go:141] libmachine: (no-preload-325431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:73:1e", ip: ""} in network mk-no-preload-325431: {Iface:virbr2 ExpiryTime:2025-01-27 14:23:44 +0000 UTC Type:0 Mac:52:54:00:0d:73:1e Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:no-preload-325431 Clientid:01:52:54:00:0d:73:1e}
I0127 13:23:51.576962 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined IP address 192.168.50.116 and MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:23:51.577105 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHHostname
I0127 13:23:51.579172 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:23:51.579599 528954 main.go:141] libmachine: (no-preload-325431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:73:1e", ip: ""} in network mk-no-preload-325431: {Iface:virbr2 ExpiryTime:2025-01-27 14:23:44 +0000 UTC Type:0 Mac:52:54:00:0d:73:1e Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:no-preload-325431 Clientid:01:52:54:00:0d:73:1e}
I0127 13:23:51.579649 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined IP address 192.168.50.116 and MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:23:51.579746 528954 provision.go:143] copyHostCerts
I0127 13:23:51.579813 528954 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-466901/.minikube/ca.pem, removing ...
I0127 13:23:51.579824 528954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-466901/.minikube/ca.pem
I0127 13:23:51.579910 528954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20317-466901/.minikube/ca.pem (1082 bytes)
I0127 13:23:51.580023 528954 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-466901/.minikube/cert.pem, removing ...
I0127 13:23:51.580032 528954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-466901/.minikube/cert.pem
I0127 13:23:51.580057 528954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20317-466901/.minikube/cert.pem (1123 bytes)
I0127 13:23:51.580129 528954 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-466901/.minikube/key.pem, removing ...
I0127 13:23:51.580138 528954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-466901/.minikube/key.pem
I0127 13:23:51.580160 528954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20317-466901/.minikube/key.pem (1675 bytes)
I0127 13:23:51.580224 528954 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20317-466901/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca-key.pem org=jenkins.no-preload-325431 san=[127.0.0.1 192.168.50.116 localhost minikube no-preload-325431]
I0127 13:23:51.922420 528954 provision.go:177] copyRemoteCerts
I0127 13:23:51.922496 528954 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0127 13:23:51.922524 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHHostname
I0127 13:23:51.925590 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:23:51.926010 528954 main.go:141] libmachine: (no-preload-325431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:73:1e", ip: ""} in network mk-no-preload-325431: {Iface:virbr2 ExpiryTime:2025-01-27 14:23:44 +0000 UTC Type:0 Mac:52:54:00:0d:73:1e Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:no-preload-325431 Clientid:01:52:54:00:0d:73:1e}
I0127 13:23:51.926039 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined IP address 192.168.50.116 and MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:23:51.926360 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHPort
I0127 13:23:51.926586 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHKeyPath
I0127 13:23:51.926759 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHUsername
I0127 13:23:51.926890 528954 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/no-preload-325431/id_rsa Username:docker}
I0127 13:23:52.005993 528954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0127 13:23:52.032651 528954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I0127 13:23:52.058042 528954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0127 13:23:52.083128 528954 provision.go:87] duration metric: took 509.868537ms to configureAuth
I0127 13:23:52.083184 528954 buildroot.go:189] setting minikube options for container-runtime
I0127 13:23:52.083429 528954 config.go:182] Loaded profile config "no-preload-325431": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 13:23:52.083447 528954 machine.go:96] duration metric: took 847.194107ms to provisionDockerMachine
I0127 13:23:52.083457 528954 start.go:293] postStartSetup for "no-preload-325431" (driver="kvm2")
I0127 13:23:52.083467 528954 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0127 13:23:52.083513 528954 main.go:141] libmachine: (no-preload-325431) Calling .DriverName
I0127 13:23:52.083855 528954 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0127 13:23:52.083886 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHHostname
I0127 13:23:52.086710 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:23:52.087095 528954 main.go:141] libmachine: (no-preload-325431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:73:1e", ip: ""} in network mk-no-preload-325431: {Iface:virbr2 ExpiryTime:2025-01-27 14:23:44 +0000 UTC Type:0 Mac:52:54:00:0d:73:1e Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:no-preload-325431 Clientid:01:52:54:00:0d:73:1e}
I0127 13:23:52.087130 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined IP address 192.168.50.116 and MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:23:52.087342 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHPort
I0127 13:23:52.087538 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHKeyPath
I0127 13:23:52.087695 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHUsername
I0127 13:23:52.087844 528954 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/no-preload-325431/id_rsa Username:docker}
I0127 13:23:52.170584 528954 ssh_runner.go:195] Run: cat /etc/os-release
I0127 13:23:52.175597 528954 info.go:137] Remote host: Buildroot 2023.02.9
I0127 13:23:52.175631 528954 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-466901/.minikube/addons for local assets ...
I0127 13:23:52.175710 528954 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-466901/.minikube/files for local assets ...
I0127 13:23:52.175824 528954 filesync.go:149] local asset: /home/jenkins/minikube-integration/20317-466901/.minikube/files/etc/ssl/certs/4742752.pem -> 4742752.pem in /etc/ssl/certs
I0127 13:23:52.175958 528954 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0127 13:23:52.186548 528954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/files/etc/ssl/certs/4742752.pem --> /etc/ssl/certs/4742752.pem (1708 bytes)
I0127 13:23:52.216400 528954 start.go:296] duration metric: took 132.926627ms for postStartSetup
I0127 13:23:52.216447 528954 fix.go:56] duration metric: took 19.460365477s for fixHost
I0127 13:23:52.216475 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHHostname
I0127 13:23:52.219697 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:23:52.220053 528954 main.go:141] libmachine: (no-preload-325431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:73:1e", ip: ""} in network mk-no-preload-325431: {Iface:virbr2 ExpiryTime:2025-01-27 14:23:44 +0000 UTC Type:0 Mac:52:54:00:0d:73:1e Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:no-preload-325431 Clientid:01:52:54:00:0d:73:1e}
I0127 13:23:52.220088 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined IP address 192.168.50.116 and MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:23:52.220300 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHPort
I0127 13:23:52.220564 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHKeyPath
I0127 13:23:52.220765 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHKeyPath
I0127 13:23:52.220919 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHUsername
I0127 13:23:52.221095 528954 main.go:141] libmachine: Using SSH client type: native
I0127 13:23:52.221263 528954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.50.116 22 <nil> <nil>}
I0127 13:23:52.221273 528954 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0127 13:23:52.320460 528954 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737984232.292373848
I0127 13:23:52.320493 528954 fix.go:216] guest clock: 1737984232.292373848
I0127 13:23:52.320500 528954 fix.go:229] Guest: 2025-01-27 13:23:52.292373848 +0000 UTC Remote: 2025-01-27 13:23:52.216451375 +0000 UTC m=+19.611033029 (delta=75.922473ms)
I0127 13:23:52.320558 528954 fix.go:200] guest clock delta is within tolerance: 75.922473ms
I0127 13:23:52.320565 528954 start.go:83] releasing machines lock for "no-preload-325431", held for 19.564499359s
I0127 13:23:52.320592 528954 main.go:141] libmachine: (no-preload-325431) Calling .DriverName
I0127 13:23:52.320893 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetIP
I0127 13:23:52.323712 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:23:52.324056 528954 main.go:141] libmachine: (no-preload-325431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:73:1e", ip: ""} in network mk-no-preload-325431: {Iface:virbr2 ExpiryTime:2025-01-27 14:23:44 +0000 UTC Type:0 Mac:52:54:00:0d:73:1e Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:no-preload-325431 Clientid:01:52:54:00:0d:73:1e}
I0127 13:23:52.324093 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined IP address 192.168.50.116 and MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:23:52.324255 528954 main.go:141] libmachine: (no-preload-325431) Calling .DriverName
I0127 13:23:52.324958 528954 main.go:141] libmachine: (no-preload-325431) Calling .DriverName
I0127 13:23:52.325177 528954 main.go:141] libmachine: (no-preload-325431) Calling .DriverName
I0127 13:23:52.325269 528954 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0127 13:23:52.325320 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHHostname
I0127 13:23:52.325420 528954 ssh_runner.go:195] Run: cat /version.json
I0127 13:23:52.325450 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHHostname
I0127 13:23:52.327983 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:23:52.328295 528954 main.go:141] libmachine: (no-preload-325431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:73:1e", ip: ""} in network mk-no-preload-325431: {Iface:virbr2 ExpiryTime:2025-01-27 14:23:44 +0000 UTC Type:0 Mac:52:54:00:0d:73:1e Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:no-preload-325431 Clientid:01:52:54:00:0d:73:1e}
I0127 13:23:52.328327 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined IP address 192.168.50.116 and MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:23:52.328348 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:23:52.328455 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHPort
I0127 13:23:52.328647 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHKeyPath
I0127 13:23:52.328805 528954 main.go:141] libmachine: (no-preload-325431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:73:1e", ip: ""} in network mk-no-preload-325431: {Iface:virbr2 ExpiryTime:2025-01-27 14:23:44 +0000 UTC Type:0 Mac:52:54:00:0d:73:1e Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:no-preload-325431 Clientid:01:52:54:00:0d:73:1e}
I0127 13:23:52.328806 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHUsername
I0127 13:23:52.328823 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined IP address 192.168.50.116 and MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:23:52.328997 528954 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/no-preload-325431/id_rsa Username:docker}
I0127 13:23:52.329018 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHPort
I0127 13:23:52.329160 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHKeyPath
I0127 13:23:52.329309 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHUsername
I0127 13:23:52.329462 528954 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/no-preload-325431/id_rsa Username:docker}
I0127 13:23:52.404857 528954 ssh_runner.go:195] Run: systemctl --version
I0127 13:23:52.424319 528954 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0127 13:23:52.430464 528954 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0127 13:23:52.430530 528954 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0127 13:23:52.446616 528954 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0127 13:23:52.446646 528954 start.go:495] detecting cgroup driver to use...
I0127 13:23:52.446712 528954 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0127 13:23:52.474253 528954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0127 13:23:52.488807 528954 docker.go:217] disabling cri-docker service (if available) ...
I0127 13:23:52.488890 528954 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0127 13:23:52.503411 528954 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0127 13:23:52.518307 528954 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0127 13:23:52.630669 528954 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0127 13:23:52.782751 528954 docker.go:233] disabling docker service ...
I0127 13:23:52.782837 528954 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0127 13:23:52.797543 528954 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0127 13:23:52.812115 528954 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0127 13:23:52.936326 528954 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0127 13:23:53.057723 528954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0127 13:23:53.072402 528954 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0127 13:23:53.091539 528954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0127 13:23:53.102146 528954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0127 13:23:53.112415 528954 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0127 13:23:53.112479 528954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0127 13:23:53.123126 528954 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0127 13:23:53.134311 528954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0127 13:23:53.145193 528954 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0127 13:23:53.156130 528954 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0127 13:23:53.167195 528954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0127 13:23:53.178035 528954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0127 13:23:53.188548 528954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
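
Taken together, the sed edits above rewrite /etc/containerd/config.toml in place: sandbox image, cgroup driver, runc runtime version, CNI conf_dir, and the unprivileged-ports setting. An illustrative way to confirm the result (the expected lines assume a stock config.toml layout, not a verbatim dump of this VM's file):

grep -E 'sandbox_image|SystemdCgroup|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
# expected, roughly:
#   sandbox_image = "registry.k8s.io/pause:3.10"
#   SystemdCgroup = false
#   conf_dir = "/etc/cni/net.d"
#   enable_unprivileged_ports = true
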
I0127 13:23:53.199463 528954 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0127 13:23:53.209469 528954 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0127 13:23:53.209534 528954 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0127 13:23:53.224391 528954 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
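
The sysctl probe above failed only because br_netfilter was not loaded yet; after the modprobe, bridged traffic becomes visible to iptables, and IPv4 forwarding is switched on. Verification sketch (that bridge-nf-call-iptables defaults to 1 once the module loads is an assumption that holds on stock kernels):

lsmod | grep br_netfilter                  # module now present
sysctl net.bridge.bridge-nf-call-iptables  # typically 1 once loaded
cat /proc/sys/net/ipv4/ip_forward          # 1 after the echo above
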
I0127 13:23:53.234772 528954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 13:23:53.350801 528954 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0127 13:23:53.380104 528954 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0127 13:23:53.380179 528954 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0127 13:23:53.385238 528954 retry.go:31] will retry after 1.052994237s: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
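
The retry above is a plain poll for the socket to reappear after the containerd restart, bounded by the 60s wait declared earlier. An equivalent shell sketch:

for i in $(seq 1 60); do
  stat /run/containerd/containerd.sock >/dev/null 2>&1 && break
  sleep 1
done
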
I0127 13:23:54.438481 528954 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0127 13:23:54.444330 528954 start.go:563] Will wait 60s for crictl version
I0127 13:23:54.444395 528954 ssh_runner.go:195] Run: which crictl
I0127 13:23:54.448559 528954 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0127 13:23:54.489223 528954 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.7.23
RuntimeApiVersion: v1
I0127 13:23:54.489301 528954 ssh_runner.go:195] Run: containerd --version
I0127 13:23:54.515421 528954 ssh_runner.go:195] Run: containerd --version
I0127 13:23:54.544703 528954 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
I0127 13:23:54.545920 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetIP
I0127 13:23:54.548686 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:23:54.549043 528954 main.go:141] libmachine: (no-preload-325431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:73:1e", ip: ""} in network mk-no-preload-325431: {Iface:virbr2 ExpiryTime:2025-01-27 14:23:44 +0000 UTC Type:0 Mac:52:54:00:0d:73:1e Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:no-preload-325431 Clientid:01:52:54:00:0d:73:1e}
I0127 13:23:54.549075 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined IP address 192.168.50.116 and MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:23:54.549338 528954 ssh_runner.go:195] Run: grep 192.168.50.1 host.minikube.internal$ /etc/hosts
I0127 13:23:54.554275 528954 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
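
The hosts-file rewrite above uses a write-to-temp-then-sudo-cp pattern because in a plain "sudo echo ... >> /etc/hosts" the redirect is performed by the unprivileged shell, not by sudo. Sketch of the failing and working forms:

sudo echo "192.168.50.1 host.minikube.internal" >> /etc/hosts   # fails: the >> runs without root
{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo "192.168.50.1 host.minikube.internal"; } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts   # works: the copy runs under sudo
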
I0127 13:23:54.567128 528954 kubeadm.go:883] updating cluster {Name:no-preload-325431 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-325431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.116 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0127 13:23:54.567304 528954 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
I0127 13:23:54.567358 528954 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 13:23:54.605284 528954 containerd.go:627] all images are preloaded for containerd runtime.
I0127 13:23:54.605318 528954 cache_images.go:84] Images are preloaded, skipping loading
I0127 13:23:54.605328 528954 kubeadm.go:934] updating node { 192.168.50.116 8443 v1.32.1 containerd true true} ...
I0127 13:23:54.605459 528954 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-325431 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.116
[Install]
config:
{KubernetesVersion:v1.32.1 ClusterName:no-preload-325431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0127 13:23:54.605536 528954 ssh_runner.go:195] Run: sudo crictl info
I0127 13:23:54.641877 528954 cni.go:84] Creating CNI manager for ""
I0127 13:23:54.641902 528954 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0127 13:23:54.641913 528954 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0127 13:23:54.641935 528954 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.116 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-325431 NodeName:no-preload-325431 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0127 13:23:54.642062 528954 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.50.116
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "no-preload-325431"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.50.116"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.50.116"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      - name: "proxy-refresh-interval"
        value: "70000"
kubernetesVersion: v1.32.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
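
Before any init phase runs, the generated file can be sanity-checked against the bundled kubeadm binary. Sketch, assuming "kubeadm config validate" is available in the v1.32.1 binary (the subcommand exists in recent kubeadm releases):

sudo /var/lib/minikube/binaries/v1.32.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
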
I0127 13:23:54.642146 528954 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
I0127 13:23:54.652780 528954 binaries.go:44] Found k8s binaries, skipping transfer
I0127 13:23:54.652853 528954 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0127 13:23:54.662470 528954 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
I0127 13:23:54.680212 528954 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0127 13:23:54.697880 528954 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2313 bytes)
I0127 13:23:54.715864 528954 ssh_runner.go:195] Run: grep 192.168.50.116 control-plane.minikube.internal$ /etc/hosts
I0127 13:23:54.719880 528954 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.116 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0127 13:23:54.732808 528954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 13:23:54.847512 528954 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0127 13:23:54.867242 528954 certs.go:68] Setting up /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/no-preload-325431 for IP: 192.168.50.116
I0127 13:23:54.867288 528954 certs.go:194] generating shared ca certs ...
I0127 13:23:54.867312 528954 certs.go:226] acquiring lock for ca certs: {Name:mk60f2aac78eb363c5e06a00675357d94c0df88d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 13:23:54.867512 528954 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20317-466901/.minikube/ca.key
I0127 13:23:54.867569 528954 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20317-466901/.minikube/proxy-client-ca.key
I0127 13:23:54.867590 528954 certs.go:256] generating profile certs ...
I0127 13:23:54.867717 528954 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/no-preload-325431/client.key
I0127 13:23:54.867803 528954 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/no-preload-325431/apiserver.key.00944cb6
I0127 13:23:54.867870 528954 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/no-preload-325431/proxy-client.key
I0127 13:23:54.868039 528954 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/474275.pem (1338 bytes)
W0127 13:23:54.868090 528954 certs.go:480] ignoring /home/jenkins/minikube-integration/20317-466901/.minikube/certs/474275_empty.pem, impossibly tiny 0 bytes
I0127 13:23:54.868103 528954 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca-key.pem (1679 bytes)
I0127 13:23:54.868137 528954 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca.pem (1082 bytes)
I0127 13:23:54.868169 528954 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/cert.pem (1123 bytes)
I0127 13:23:54.868205 528954 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/key.pem (1675 bytes)
I0127 13:23:54.868260 528954 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/files/etc/ssl/certs/4742752.pem (1708 bytes)
I0127 13:23:54.868948 528954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0127 13:23:54.916286 528954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0127 13:23:54.951595 528954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0127 13:23:54.985978 528954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0127 13:23:55.017210 528954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/no-preload-325431/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I0127 13:23:55.046840 528954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/no-preload-325431/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0127 13:23:55.080541 528954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/no-preload-325431/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0127 13:23:55.107806 528954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/no-preload-325431/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0127 13:23:55.134194 528954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/files/etc/ssl/certs/4742752.pem --> /usr/share/ca-certificates/4742752.pem (1708 bytes)
I0127 13:23:55.158899 528954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0127 13:23:55.183077 528954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/certs/474275.pem --> /usr/share/ca-certificates/474275.pem (1338 bytes)
I0127 13:23:55.208128 528954 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0127 13:23:55.225606 528954 ssh_runner.go:195] Run: openssl version
I0127 13:23:55.231583 528954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/474275.pem && ln -fs /usr/share/ca-certificates/474275.pem /etc/ssl/certs/474275.pem"
I0127 13:23:55.242957 528954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/474275.pem
I0127 13:23:55.247769 528954 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 12:18 /usr/share/ca-certificates/474275.pem
I0127 13:23:55.247833 528954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/474275.pem
I0127 13:23:55.253810 528954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/474275.pem /etc/ssl/certs/51391683.0"
I0127 13:23:55.264734 528954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4742752.pem && ln -fs /usr/share/ca-certificates/4742752.pem /etc/ssl/certs/4742752.pem"
I0127 13:23:55.275229 528954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4742752.pem
I0127 13:23:55.279764 528954 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 12:18 /usr/share/ca-certificates/4742752.pem
I0127 13:23:55.279820 528954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4742752.pem
I0127 13:23:55.285356 528954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4742752.pem /etc/ssl/certs/3ec20f2e.0"
I0127 13:23:55.296693 528954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0127 13:23:55.307430 528954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0127 13:23:55.311970 528954 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 12:10 /usr/share/ca-certificates/minikubeCA.pem
I0127 13:23:55.312034 528954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0127 13:23:55.317751 528954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
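
The 51391683.0, 3ec20f2e.0, and b5213941.0 names above follow OpenSSL's hashed-directory convention: the link name is the subject hash printed by "openssl x509 -hash" plus a .0 suffix, which is how the TLS stack locates a CA at verification time. Generic sketch:

h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 for this CA
sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
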
I0127 13:23:55.328528 528954 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0127 13:23:55.333031 528954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0127 13:23:55.339165 528954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0127 13:23:55.345030 528954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0127 13:23:55.351000 528954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0127 13:23:55.357110 528954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0127 13:23:55.362931 528954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
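
Each openssl run above is a 24-hour expiry guard: -checkend 86400 exits nonzero if the certificate expires within the next 86400 seconds, so a failure here would trigger regeneration rather than restarting with a nearly-expired cert. Sketch:

if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
  echo "cert good for at least 24h"
else
  echo "cert expires within 24h"
fi
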
I0127 13:23:55.368994 528954 kubeadm.go:392] StartCluster: {Name:no-preload-325431 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-325431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.116 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0127 13:23:55.369085 528954 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0127 13:23:55.369182 528954 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0127 13:23:55.408100 528954 cri.go:89] found id: "c68fbcf444499cb39d7294187cae28b551fc12a41a7d7575e7d1421329e25bbf"
I0127 13:23:55.408125 528954 cri.go:89] found id: "0a4c5593d50184f31756d9cbe22f35cc64a4493696b220236fc8cd336bea80c9"
I0127 13:23:55.408128 528954 cri.go:89] found id: "d5f44fa632f1ba0498db2368e5d356b6d42d159b21888ca6f03c332776dd90a3"
I0127 13:23:55.408131 528954 cri.go:89] found id: "dc49ce7264abdab1ac0f54d3d0bca6e69e0ade71ae40edcef409d23840d99e10"
I0127 13:23:55.408136 528954 cri.go:89] found id: "223a386bbf08aebb7e1c728f9978424cab03999be2b38aec285e563dee72ad18"
I0127 13:23:55.408138 528954 cri.go:89] found id: "ec1138b13b4d8060d23ab37dfadff9c7a064e3ffd21cbff59c3b64d7a18e088d"
I0127 13:23:55.408141 528954 cri.go:89] found id: "996af3e916c92d0d5e13ae41c60e5e4563818028ed964075bff55b000dfbfad2"
I0127 13:23:55.408144 528954 cri.go:89] found id: ""
I0127 13:23:55.408189 528954 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
W0127 13:23:55.423754 528954 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-01-27T13:23:55Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I0127 13:23:55.423854 528954 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0127 13:23:55.434276 528954 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0127 13:23:55.434299 528954 kubeadm.go:593] restartPrimaryControlPlane start ...
I0127 13:23:55.434350 528954 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0127 13:23:55.444034 528954 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0127 13:23:55.445020 528954 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-325431" does not appear in /home/jenkins/minikube-integration/20317-466901/kubeconfig
I0127 13:23:55.445645 528954 kubeconfig.go:62] /home/jenkins/minikube-integration/20317-466901/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-325431" cluster setting kubeconfig missing "no-preload-325431" context setting]
I0127 13:23:55.446625 528954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-466901/kubeconfig: {Name:mkc116eec378af43ea8fefe45e11af3e19be85bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 13:23:55.448630 528954 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0127 13:23:55.458286 528954 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.116
I0127 13:23:55.458319 528954 kubeadm.go:1160] stopping kube-system containers ...
I0127 13:23:55.458337 528954 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
I0127 13:23:55.458408 528954 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0127 13:23:55.499889 528954 cri.go:89] found id: "c68fbcf444499cb39d7294187cae28b551fc12a41a7d7575e7d1421329e25bbf"
I0127 13:23:55.499914 528954 cri.go:89] found id: "0a4c5593d50184f31756d9cbe22f35cc64a4493696b220236fc8cd336bea80c9"
I0127 13:23:55.499920 528954 cri.go:89] found id: "d5f44fa632f1ba0498db2368e5d356b6d42d159b21888ca6f03c332776dd90a3"
I0127 13:23:55.499926 528954 cri.go:89] found id: "dc49ce7264abdab1ac0f54d3d0bca6e69e0ade71ae40edcef409d23840d99e10"
I0127 13:23:55.499930 528954 cri.go:89] found id: "223a386bbf08aebb7e1c728f9978424cab03999be2b38aec285e563dee72ad18"
I0127 13:23:55.499941 528954 cri.go:89] found id: "ec1138b13b4d8060d23ab37dfadff9c7a064e3ffd21cbff59c3b64d7a18e088d"
I0127 13:23:55.499945 528954 cri.go:89] found id: "996af3e916c92d0d5e13ae41c60e5e4563818028ed964075bff55b000dfbfad2"
I0127 13:23:55.499948 528954 cri.go:89] found id: ""
I0127 13:23:55.499956 528954 cri.go:252] Stopping containers: [c68fbcf444499cb39d7294187cae28b551fc12a41a7d7575e7d1421329e25bbf 0a4c5593d50184f31756d9cbe22f35cc64a4493696b220236fc8cd336bea80c9 d5f44fa632f1ba0498db2368e5d356b6d42d159b21888ca6f03c332776dd90a3 dc49ce7264abdab1ac0f54d3d0bca6e69e0ade71ae40edcef409d23840d99e10 223a386bbf08aebb7e1c728f9978424cab03999be2b38aec285e563dee72ad18 ec1138b13b4d8060d23ab37dfadff9c7a064e3ffd21cbff59c3b64d7a18e088d 996af3e916c92d0d5e13ae41c60e5e4563818028ed964075bff55b000dfbfad2]
I0127 13:23:55.500016 528954 ssh_runner.go:195] Run: which crictl
I0127 13:23:55.504252 528954 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 c68fbcf444499cb39d7294187cae28b551fc12a41a7d7575e7d1421329e25bbf 0a4c5593d50184f31756d9cbe22f35cc64a4493696b220236fc8cd336bea80c9 d5f44fa632f1ba0498db2368e5d356b6d42d159b21888ca6f03c332776dd90a3 dc49ce7264abdab1ac0f54d3d0bca6e69e0ade71ae40edcef409d23840d99e10 223a386bbf08aebb7e1c728f9978424cab03999be2b38aec285e563dee72ad18 ec1138b13b4d8060d23ab37dfadff9c7a064e3ffd21cbff59c3b64d7a18e088d 996af3e916c92d0d5e13ae41c60e5e4563818028ed964075bff55b000dfbfad2
I0127 13:23:55.543959 528954 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0127 13:23:55.561290 528954 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0127 13:23:55.571243 528954 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0127 13:23:55.571286 528954 kubeadm.go:157] found existing configuration files:
I0127 13:23:55.571341 528954 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0127 13:23:55.580728 528954 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0127 13:23:55.580802 528954 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0127 13:23:55.590469 528954 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0127 13:23:55.599442 528954 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0127 13:23:55.599505 528954 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0127 13:23:55.608639 528954 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0127 13:23:55.617810 528954 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0127 13:23:55.617866 528954 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0127 13:23:55.627449 528954 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0127 13:23:55.636352 528954 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0127 13:23:55.636414 528954 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0127 13:23:55.646169 528954 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0127 13:23:55.655678 528954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0127 13:23:55.781022 528954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0127 13:23:56.984649 528954 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.20358425s)
I0127 13:23:56.984691 528954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0127 13:23:57.193584 528954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0127 13:23:57.283053 528954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0127 13:23:57.356286 528954 api_server.go:52] waiting for apiserver process to appear ...
I0127 13:23:57.356415 528954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 13:23:57.856971 528954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 13:23:58.357257 528954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 13:23:58.857175 528954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 13:23:58.873326 528954 api_server.go:72] duration metric: took 1.517043726s to wait for apiserver process to appear ...
I0127 13:23:58.873352 528954 api_server.go:88] waiting for apiserver healthz status ...
I0127 13:23:58.873375 528954 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8443/healthz ...
I0127 13:24:00.973587 528954 api_server.go:279] https://192.168.50.116:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0127 13:24:00.973620 528954 api_server.go:103] status: https://192.168.50.116:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0127 13:24:00.973641 528954 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8443/healthz ...
I0127 13:24:01.002147 528954 api_server.go:279] https://192.168.50.116:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0127 13:24:01.002185 528954 api_server.go:103] status: https://192.168.50.116:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0127 13:24:01.373719 528954 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8443/healthz ...
I0127 13:24:01.378715 528954 api_server.go:279] https://192.168.50.116:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0127 13:24:01.378743 528954 api_server.go:103] status: https://192.168.50.116:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0127 13:24:01.874416 528954 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8443/healthz ...
I0127 13:24:01.880211 528954 api_server.go:279] https://192.168.50.116:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0127 13:24:01.880238 528954 api_server.go:103] status: https://192.168.50.116:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0127 13:24:02.373621 528954 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8443/healthz ...
I0127 13:24:02.379055 528954 api_server.go:279] https://192.168.50.116:8443/healthz returned 200:
ok
I0127 13:24:02.387805 528954 api_server.go:141] control plane version: v1.32.1
I0127 13:24:02.387834 528954 api_server.go:131] duration metric: took 3.514474808s to wait for apiserver health ...
I0127 13:24:02.387843 528954 cni.go:84] Creating CNI manager for ""
I0127 13:24:02.387850 528954 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0127 13:24:02.389582 528954 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0127 13:24:02.391147 528954 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0127 13:24:02.406580 528954 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
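
The 496-byte file written above is minikube's bridge CNI config for the pod CIDR chosen earlier. To inspect what landed on disk (sketch; the expected contents summarize minikube's template rather than reproduce it verbatim):

sudo cat /etc/cni/net.d/1-k8s.conflist
# expect a conflist with a "bridge" plugin (host-local IPAM over 10.244.0.0/16)
# plus a "portmap" plugin for hostPort support
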
I0127 13:24:02.436722 528954 system_pods.go:43] waiting for kube-system pods to appear ...
I0127 13:24:02.481735 528954 system_pods.go:59] 8 kube-system pods found
I0127 13:24:02.481792 528954 system_pods.go:61] "coredns-668d6bf9bc-bf8dx" [17e4173a-79c1-4a5b-be36-b1bd729f60ea] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0127 13:24:02.481817 528954 system_pods.go:61] "etcd-no-preload-325431" [d6e0d509-1ce1-403f-b611-ea6aafe35cb6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0127 13:24:02.481828 528954 system_pods.go:61] "kube-apiserver-no-preload-325431" [a389cfe9-f329-492d-bde1-060abc8566b1] Running
I0127 13:24:02.481849 528954 system_pods.go:61] "kube-controller-manager-no-preload-325431" [cc0b544b-4e68-42e2-a648-8169e71b3dab] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0127 13:24:02.481859 528954 system_pods.go:61] "kube-proxy-l848r" [f21a5889-e77f-4758-85b4-4a3690aa5ac5] Running
I0127 13:24:02.481865 528954 system_pods.go:61] "kube-scheduler-no-preload-325431" [458c9ea7-9b2d-4f95-8327-95a1d758b6d4] Running
I0127 13:24:02.481876 528954 system_pods.go:61] "metrics-server-f79f97bbb-8xzvp" [4697d44a-38ad-4036-b70d-9b1adb06b4fd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0127 13:24:02.481889 528954 system_pods.go:61] "storage-provisioner" [2438a5ef-b375-4b61-8e3c-d06546af3cf3] Running
I0127 13:24:02.481898 528954 system_pods.go:74] duration metric: took 45.15227ms to wait for pod list to return data ...
I0127 13:24:02.481913 528954 node_conditions.go:102] verifying NodePressure condition ...
I0127 13:24:02.487615 528954 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0127 13:24:02.487650 528954 node_conditions.go:123] node cpu capacity is 2
I0127 13:24:02.487665 528954 node_conditions.go:105] duration metric: took 5.744059ms to run NodePressure ...
I0127 13:24:02.487690 528954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0127 13:24:02.953730 528954 kubeadm.go:724] waiting for restarted kubelet to initialise ...
I0127 13:24:02.962714 528954 kubeadm.go:739] kubelet initialised
I0127 13:24:02.962743 528954 kubeadm.go:740] duration metric: took 8.973475ms waiting for restarted kubelet to initialise ...
I0127 13:24:02.962754 528954 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0127 13:24:03.064260 528954 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-bf8dx" in "kube-system" namespace to be "Ready" ...
I0127 13:24:05.071758 528954 pod_ready.go:103] pod "coredns-668d6bf9bc-bf8dx" in "kube-system" namespace has status "Ready":"False"
I0127 13:24:07.570901 528954 pod_ready.go:103] pod "coredns-668d6bf9bc-bf8dx" in "kube-system" namespace has status "Ready":"False"
I0127 13:24:08.071180 528954 pod_ready.go:93] pod "coredns-668d6bf9bc-bf8dx" in "kube-system" namespace has status "Ready":"True"
I0127 13:24:08.071211 528954 pod_ready.go:82] duration metric: took 5.006908748s for pod "coredns-668d6bf9bc-bf8dx" in "kube-system" namespace to be "Ready" ...
I0127 13:24:08.071222 528954 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-325431" in "kube-system" namespace to be "Ready" ...
I0127 13:24:10.082734 528954 pod_ready.go:103] pod "etcd-no-preload-325431" in "kube-system" namespace has status "Ready":"False"
I0127 13:24:12.578331 528954 pod_ready.go:103] pod "etcd-no-preload-325431" in "kube-system" namespace has status "Ready":"False"
I0127 13:24:14.579534 528954 pod_ready.go:103] pod "etcd-no-preload-325431" in "kube-system" namespace has status "Ready":"False"
I0127 13:24:15.577894 528954 pod_ready.go:93] pod "etcd-no-preload-325431" in "kube-system" namespace has status "Ready":"True"
I0127 13:24:15.577926 528954 pod_ready.go:82] duration metric: took 7.506694818s for pod "etcd-no-preload-325431" in "kube-system" namespace to be "Ready" ...
I0127 13:24:15.577940 528954 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-325431" in "kube-system" namespace to be "Ready" ...
I0127 13:24:16.585574 528954 pod_ready.go:93] pod "kube-apiserver-no-preload-325431" in "kube-system" namespace has status "Ready":"True"
I0127 13:24:16.585599 528954 pod_ready.go:82] duration metric: took 1.007650863s for pod "kube-apiserver-no-preload-325431" in "kube-system" namespace to be "Ready" ...
I0127 13:24:16.585610 528954 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-325431" in "kube-system" namespace to be "Ready" ...
I0127 13:24:16.591406 528954 pod_ready.go:93] pod "kube-controller-manager-no-preload-325431" in "kube-system" namespace has status "Ready":"True"
I0127 13:24:16.591436 528954 pod_ready.go:82] duration metric: took 5.818528ms for pod "kube-controller-manager-no-preload-325431" in "kube-system" namespace to be "Ready" ...
I0127 13:24:16.591452 528954 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-l848r" in "kube-system" namespace to be "Ready" ...
I0127 13:24:16.598966 528954 pod_ready.go:93] pod "kube-proxy-l848r" in "kube-system" namespace has status "Ready":"True"
I0127 13:24:16.598993 528954 pod_ready.go:82] duration metric: took 7.533761ms for pod "kube-proxy-l848r" in "kube-system" namespace to be "Ready" ...
I0127 13:24:16.599003 528954 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-325431" in "kube-system" namespace to be "Ready" ...
I0127 13:24:17.605675 528954 pod_ready.go:93] pod "kube-scheduler-no-preload-325431" in "kube-system" namespace has status "Ready":"True"
I0127 13:24:17.605704 528954 pod_ready.go:82] duration metric: took 1.006693331s for pod "kube-scheduler-no-preload-325431" in "kube-system" namespace to be "Ready" ...
I0127 13:24:17.605715 528954 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace to be "Ready" ...
I0127 13:24:19.613122 528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
[... the same pod_ready.go:103 check repeats roughly every 2-2.5s, 102 more times, from 13:24:21 through 13:28:14; "Ready" stays "False" for the entire window ...]
I0127 13:28:16.613032 528954 pod_ready.go:103] pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace has status "Ready":"False"
I0127 13:28:17.606211 528954 pod_ready.go:82] duration metric: took 4m0.000478536s for pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace to be "Ready" ...
E0127 13:28:17.606244 528954 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-8xzvp" in "kube-system" namespace to be "Ready" (will not retry!)
I0127 13:28:17.606268 528954 pod_ready.go:39] duration metric: took 4m14.643501676s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
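For reference, the pod_ready wait that just timed out is a plain status-condition poll against the API server. A minimal client-go sketch of the same kind of check follows; it is an illustration, not minikube's actual pod_ready.go code, and the kubeconfig path is an assumption while the pod name, interval, and 4m0s deadline are taken from the log:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's "Ready" condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Assumption: a standard ~/.kube/config; minikube builds its own REST config instead.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll every 2s for up to 4m0s, mirroring the cadence and deadline in the log.
        err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "metrics-server-f79f97bbb-8xzvp", metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat transient API errors as "not ready yet"
                }
                return podReady(pod), nil
            })
        // On timeout err is non-nil, which is the WaitExtra failure logged above.
        fmt.Println("wait result:", err)
    }

The deadline is surfaced as an error rather than another retry, matching the "(will not retry!)" message above.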
I0127 13:28:17.606320 528954 kubeadm.go:597] duration metric: took 4m22.172013871s to restartPrimaryControlPlane
W0127 13:28:17.606408 528954 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
I0127 13:28:17.606449 528954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I0127 13:28:19.440328 528954 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.83384135s)
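The Run/Completed pairs throughout this log time each remote command, and a Completed line with the elapsed duration appears for the slow ones. A local stand-in for that pattern (illustrative only; minikube's ssh_runner executes these over SSH on the node):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        start := time.Now()
        // Stand-in for the kubeadm reset invocation above; "sleep" simulates the work.
        out, err := exec.Command("/bin/bash", "-c", "sleep 1 && echo done").CombinedOutput()
        fmt.Printf("Completed: %q err=%v (%s)\n", string(out), err, time.Since(start))
    }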
I0127 13:28:19.440434 528954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0127 13:28:19.457247 528954 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0127 13:28:19.468454 528954 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0127 13:28:19.479090 528954 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0127 13:28:19.479120 528954 kubeadm.go:157] found existing configuration files:
I0127 13:28:19.479176 528954 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0127 13:28:19.489428 528954 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0127 13:28:19.489513 528954 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0127 13:28:19.500168 528954 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0127 13:28:19.513940 528954 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0127 13:28:19.514000 528954 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0127 13:28:19.526564 528954 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0127 13:28:19.536966 528954 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0127 13:28:19.537051 528954 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0127 13:28:19.547626 528954 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0127 13:28:19.557566 528954 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0127 13:28:19.557652 528954 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
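The four grep/rm pairs above implement a simple rule: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed so kubeadm init can regenerate it. In this run every grep exits with status 2 because the files are already gone after the reset. A compact sketch of that rule (a hypothetical helper, not minikube's code):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        for _, f := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                os.Remove(f) // ignore the error: the file may already be absent, as in this run
                fmt.Println("cleared:", f)
            }
        }
    }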
I0127 13:28:19.568536 528954 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0127 13:28:19.733134 528954 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0127 13:28:29.507095 528954 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
I0127 13:28:29.507181 528954 kubeadm.go:310] [preflight] Running pre-flight checks
I0127 13:28:29.507303 528954 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0127 13:28:29.507433 528954 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0127 13:28:29.507569 528954 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0127 13:28:29.507651 528954 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0127 13:28:29.555822 528954 out.go:235] - Generating certificates and keys ...
I0127 13:28:29.555980 528954 kubeadm.go:310] [certs] Using existing ca certificate authority
I0127 13:28:29.556057 528954 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0127 13:28:29.556164 528954 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0127 13:28:29.556257 528954 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
I0127 13:28:29.556362 528954 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
I0127 13:28:29.556450 528954 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
I0127 13:28:29.556534 528954 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
I0127 13:28:29.556621 528954 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
I0127 13:28:29.556725 528954 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0127 13:28:29.556836 528954 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0127 13:28:29.556899 528954 kubeadm.go:310] [certs] Using the existing "sa" key
I0127 13:28:29.556989 528954 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0127 13:28:29.557062 528954 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0127 13:28:29.557154 528954 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0127 13:28:29.557231 528954 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0127 13:28:29.557321 528954 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0127 13:28:29.557467 528954 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0127 13:28:29.557589 528954 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0127 13:28:29.557650 528954 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0127 13:28:29.559497 528954 out.go:235] - Booting up control plane ...
I0127 13:28:29.559615 528954 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0127 13:28:29.559733 528954 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0127 13:28:29.559822 528954 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0127 13:28:29.559954 528954 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0127 13:28:29.560102 528954 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0127 13:28:29.560178 528954 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0127 13:28:29.560313 528954 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0127 13:28:29.560450 528954 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0127 13:28:29.560525 528954 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 506.686794ms
I0127 13:28:29.560617 528954 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0127 13:28:29.560673 528954 kubeadm.go:310] [api-check] The API server is healthy after 6.003038304s
I0127 13:28:29.560795 528954 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0127 13:28:29.560965 528954 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0127 13:28:29.561040 528954 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0127 13:28:29.561242 528954 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-325431 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0127 13:28:29.561318 528954 kubeadm.go:310] [bootstrap-token] Using token: ec8dk3.k4ocr1751q2as6lm
I0127 13:28:29.563363 528954 out.go:235] - Configuring RBAC rules ...
I0127 13:28:29.563514 528954 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0127 13:28:29.563634 528954 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0127 13:28:29.563884 528954 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0127 13:28:29.564032 528954 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0127 13:28:29.564184 528954 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0127 13:28:29.564302 528954 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0127 13:28:29.564447 528954 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0127 13:28:29.564512 528954 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0127 13:28:29.564552 528954 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0127 13:28:29.564556 528954 kubeadm.go:310]
I0127 13:28:29.564605 528954 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0127 13:28:29.564608 528954 kubeadm.go:310]
I0127 13:28:29.564675 528954 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0127 13:28:29.564678 528954 kubeadm.go:310]
I0127 13:28:29.564700 528954 kubeadm.go:310] mkdir -p $HOME/.kube
I0127 13:28:29.564747 528954 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0127 13:28:29.564792 528954 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0127 13:28:29.564795 528954 kubeadm.go:310]
I0127 13:28:29.564866 528954 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0127 13:28:29.564872 528954 kubeadm.go:310]
I0127 13:28:29.564922 528954 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0127 13:28:29.564926 528954 kubeadm.go:310]
I0127 13:28:29.564991 528954 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0127 13:28:29.565074 528954 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0127 13:28:29.565163 528954 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0127 13:28:29.565168 528954 kubeadm.go:310]
I0127 13:28:29.565262 528954 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0127 13:28:29.565346 528954 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0127 13:28:29.565350 528954 kubeadm.go:310]
I0127 13:28:29.565421 528954 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ec8dk3.k4ocr1751q2as6lm \
I0127 13:28:29.565504 528954 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:44e7ea386d1f8e7ab1336d835156dd84ecca20069390afc88f04bb1a3c629fd2 \
I0127 13:28:29.565528 528954 kubeadm.go:310] --control-plane
I0127 13:28:29.565534 528954 kubeadm.go:310]
I0127 13:28:29.565640 528954 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0127 13:28:29.565647 528954 kubeadm.go:310]
I0127 13:28:29.565721 528954 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ec8dk3.k4ocr1751q2as6lm \
I0127 13:28:29.565880 528954 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:44e7ea386d1f8e7ab1336d835156dd84ecca20069390afc88f04bb1a3c629fd2
I0127 13:28:29.565896 528954 cni.go:84] Creating CNI manager for ""
I0127 13:28:29.565905 528954 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0127 13:28:29.571921 528954 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0127 13:28:29.573671 528954 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0127 13:28:29.600549 528954 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
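The 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is not shown in the log. For orientation, a typical bridge CNI conflist looks roughly like the following; the content here is assumed and only printed, and the file minikube actually generates may differ in plugin names and subnet:

    package main

    import "fmt"

    func main() {
        conflist := `{
          "cniVersion": "0.3.1",
          "name": "bridge",
          "plugins": [
            {
              "type": "bridge",
              "bridge": "bridge",
              "isGateway": true,
              "ipMasq": true,
              "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
            },
            {"type": "portmap", "capabilities": {"portMappings": true}}
          ]
        }`
        fmt.Println(conflist)
        // Installing it would need root, e.g. writing to /etc/cni/net.d/1-k8s.conflist with mode 0644.
    }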
I0127 13:28:29.632214 528954 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0127 13:28:29.632318 528954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 13:28:29.632503 528954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-325431 minikube.k8s.io/updated_at=2025_01_27T13_28_29_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=0d71ce9b1959d04f0d9fa7dbc5639a49619ad89b minikube.k8s.io/name=no-preload-325431 minikube.k8s.io/primary=true
I0127 13:28:29.658309 528954 ops.go:34] apiserver oom_adj: -16
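The oom_adj probe above confirms the API server is shielded from the kernel OOM killer (-16 biases the kernel strongly away from killing it). The same probe, sketched in Go (assumes a Linux host with pgrep, like the commands in this log):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Newest exact-name match, mirroring the pgrep usage elsewhere in this log.
        out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
        if err != nil {
            fmt.Println("kube-apiserver not running:", err)
            return
        }
        pid := strings.TrimSpace(string(out))
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            fmt.Println("read failed:", err)
            return
        }
        fmt.Printf("apiserver oom_adj: %s", adj) // -16 in this run
    }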
I0127 13:28:30.154694 528954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 13:28:30.655330 528954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 13:28:31.154961 528954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 13:28:31.654793 528954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 13:28:32.155389 528954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 13:28:32.655001 528954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 13:28:32.770122 528954 kubeadm.go:1113] duration metric: took 3.137876229s to wait for elevateKubeSystemPrivileges
I0127 13:28:32.770176 528954 kubeadm.go:394] duration metric: took 4m37.401187954s to StartCluster
I0127 13:28:32.770204 528954 settings.go:142] acquiring lock: {Name:mk070ebf22d35da2704f00750921836dbd2cd121 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 13:28:32.770307 528954 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20317-466901/kubeconfig
I0127 13:28:32.771338 528954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-466901/kubeconfig: {Name:mkc116eec378af43ea8fefe45e11af3e19be85bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 13:28:32.771619 528954 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.116 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0127 13:28:32.771757 528954 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0127 13:28:32.771867 528954 config.go:182] Loaded profile config "no-preload-325431": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 13:28:32.771877 528954 addons.go:69] Setting storage-provisioner=true in profile "no-preload-325431"
I0127 13:28:32.771896 528954 addons.go:238] Setting addon storage-provisioner=true in "no-preload-325431"
W0127 13:28:32.771912 528954 addons.go:247] addon storage-provisioner should already be in state true
I0127 13:28:32.771924 528954 addons.go:69] Setting metrics-server=true in profile "no-preload-325431"
I0127 13:28:32.771940 528954 addons.go:238] Setting addon metrics-server=true in "no-preload-325431"
I0127 13:28:32.771948 528954 host.go:66] Checking if "no-preload-325431" exists ...
I0127 13:28:32.771951 528954 addons.go:69] Setting dashboard=true in profile "no-preload-325431"
I0127 13:28:32.771971 528954 addons.go:238] Setting addon dashboard=true in "no-preload-325431"
W0127 13:28:32.771985 528954 addons.go:247] addon dashboard should already be in state true
I0127 13:28:32.772026 528954 host.go:66] Checking if "no-preload-325431" exists ...
I0127 13:28:32.772339 528954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:28:32.772381 528954 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:28:32.772444 528954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:28:32.772491 528954 main.go:141] libmachine: Launching plugin server for driver kvm2
W0127 13:28:32.771954 528954 addons.go:247] addon metrics-server should already be in state true
I0127 13:28:32.771931 528954 addons.go:69] Setting default-storageclass=true in profile "no-preload-325431"
I0127 13:28:32.772561 528954 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-325431"
I0127 13:28:32.772704 528954 host.go:66] Checking if "no-preload-325431" exists ...
I0127 13:28:32.773018 528954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:28:32.773059 528954 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:28:32.773063 528954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:28:32.773106 528954 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:28:32.773684 528954 out.go:177] * Verifying Kubernetes components...
I0127 13:28:32.775484 528954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 13:28:32.791534 528954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32981
I0127 13:28:32.792145 528954 main.go:141] libmachine: () Calling .GetVersion
I0127 13:28:32.792826 528954 main.go:141] libmachine: Using API Version 1
I0127 13:28:32.792857 528954 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:28:32.792949 528954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37149
I0127 13:28:32.792988 528954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37917
I0127 13:28:32.793322 528954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34337
I0127 13:28:32.793488 528954 main.go:141] libmachine: () Calling .GetVersion
I0127 13:28:32.793579 528954 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:28:32.793653 528954 main.go:141] libmachine: () Calling .GetVersion
I0127 13:28:32.793708 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetState
I0127 13:28:32.793967 528954 main.go:141] libmachine: Using API Version 1
I0127 13:28:32.793989 528954 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:28:32.794127 528954 main.go:141] libmachine: Using API Version 1
I0127 13:28:32.794144 528954 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:28:32.794498 528954 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:28:32.794531 528954 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:28:32.794779 528954 main.go:141] libmachine: () Calling .GetVersion
I0127 13:28:32.795535 528954 main.go:141] libmachine: Using API Version 1
I0127 13:28:32.795556 528954 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:28:32.795851 528954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:28:32.795888 528954 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:28:32.797017 528954 addons.go:238] Setting addon default-storageclass=true in "no-preload-325431"
W0127 13:28:32.797035 528954 addons.go:247] addon default-storageclass should already be in state true
I0127 13:28:32.797068 528954 host.go:66] Checking if "no-preload-325431" exists ...
I0127 13:28:32.797418 528954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:28:32.797453 528954 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:28:32.797741 528954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:28:32.797777 528954 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:28:32.797977 528954 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:28:32.798620 528954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:28:32.798660 528954 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:28:32.817426 528954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35197
I0127 13:28:32.817901 528954 main.go:141] libmachine: () Calling .GetVersion
I0127 13:28:32.818380 528954 main.go:141] libmachine: Using API Version 1
I0127 13:28:32.818399 528954 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:28:32.818715 528954 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:28:32.818907 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetState
I0127 13:28:32.821099 528954 main.go:141] libmachine: (no-preload-325431) Calling .DriverName
I0127 13:28:32.821782 528954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38249
I0127 13:28:32.822281 528954 main.go:141] libmachine: () Calling .GetVersion
I0127 13:28:32.822811 528954 main.go:141] libmachine: Using API Version 1
I0127 13:28:32.822835 528954 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:28:32.823252 528954 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:28:32.823879 528954 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0127 13:28:32.825375 528954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42361
I0127 13:28:32.825970 528954 main.go:141] libmachine: () Calling .GetVersion
I0127 13:28:32.826674 528954 main.go:141] libmachine: Using API Version 1
I0127 13:28:32.826699 528954 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:28:32.827070 528954 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:28:32.827808 528954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:28:32.827868 528954 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:28:32.828111 528954 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0127 13:28:32.828544 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetState
I0127 13:28:32.829570 528954 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0127 13:28:32.829601 528954 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0127 13:28:32.829627 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHHostname
I0127 13:28:32.831338 528954 main.go:141] libmachine: (no-preload-325431) Calling .DriverName
I0127 13:28:32.834827 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:28:32.835365 528954 main.go:141] libmachine: (no-preload-325431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:73:1e", ip: ""} in network mk-no-preload-325431: {Iface:virbr2 ExpiryTime:2025-01-27 14:23:44 +0000 UTC Type:0 Mac:52:54:00:0d:73:1e Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:no-preload-325431 Clientid:01:52:54:00:0d:73:1e}
I0127 13:28:32.835387 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined IP address 192.168.50.116 and MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:28:32.835758 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHPort
I0127 13:28:32.835988 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHKeyPath
I0127 13:28:32.836173 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHUsername
I0127 13:28:32.836364 528954 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/no-preload-325431/id_rsa Username:docker}
I0127 13:28:32.837086 528954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43943
I0127 13:28:32.837418 528954 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0127 13:28:32.837500 528954 main.go:141] libmachine: () Calling .GetVersion
I0127 13:28:32.838122 528954 main.go:141] libmachine: Using API Version 1
I0127 13:28:32.838148 528954 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:28:32.838640 528954 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:28:32.838813 528954 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0127 13:28:32.838830 528954 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0127 13:28:32.838853 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHHostname
I0127 13:28:32.838871 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetState
I0127 13:28:32.841521 528954 main.go:141] libmachine: (no-preload-325431) Calling .DriverName
I0127 13:28:32.843361 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:28:32.843995 528954 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0127 13:28:32.844249 528954 main.go:141] libmachine: (no-preload-325431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:73:1e", ip: ""} in network mk-no-preload-325431: {Iface:virbr2 ExpiryTime:2025-01-27 14:23:44 +0000 UTC Type:0 Mac:52:54:00:0d:73:1e Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:no-preload-325431 Clientid:01:52:54:00:0d:73:1e}
I0127 13:28:32.844286 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined IP address 192.168.50.116 and MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:28:32.844647 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHPort
I0127 13:28:32.844886 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHKeyPath
I0127 13:28:32.845200 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHUsername
I0127 13:28:32.845386 528954 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/no-preload-325431/id_rsa Username:docker}
I0127 13:28:32.845938 528954 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0127 13:28:32.845958 528954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0127 13:28:32.845976 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHHostname
I0127 13:28:32.848694 528954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42355
I0127 13:28:32.849174 528954 main.go:141] libmachine: () Calling .GetVersion
I0127 13:28:32.849648 528954 main.go:141] libmachine: Using API Version 1
I0127 13:28:32.849668 528954 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:28:32.849887 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:28:32.850116 528954 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:28:32.850322 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetState
I0127 13:28:32.850423 528954 main.go:141] libmachine: (no-preload-325431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:73:1e", ip: ""} in network mk-no-preload-325431: {Iface:virbr2 ExpiryTime:2025-01-27 14:23:44 +0000 UTC Type:0 Mac:52:54:00:0d:73:1e Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:no-preload-325431 Clientid:01:52:54:00:0d:73:1e}
I0127 13:28:32.850486 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined IP address 192.168.50.116 and MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:28:32.850698 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHPort
I0127 13:28:32.850901 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHKeyPath
I0127 13:28:32.851130 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHUsername
I0127 13:28:32.851341 528954 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/no-preload-325431/id_rsa Username:docker}
I0127 13:28:32.852026 528954 main.go:141] libmachine: (no-preload-325431) Calling .DriverName
I0127 13:28:32.852266 528954 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0127 13:28:32.852280 528954 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0127 13:28:32.852294 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHHostname
I0127 13:28:32.855632 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:28:32.856244 528954 main.go:141] libmachine: (no-preload-325431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:73:1e", ip: ""} in network mk-no-preload-325431: {Iface:virbr2 ExpiryTime:2025-01-27 14:23:44 +0000 UTC Type:0 Mac:52:54:00:0d:73:1e Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:no-preload-325431 Clientid:01:52:54:00:0d:73:1e}
I0127 13:28:32.856261 528954 main.go:141] libmachine: (no-preload-325431) DBG | domain no-preload-325431 has defined IP address 192.168.50.116 and MAC address 52:54:00:0d:73:1e in network mk-no-preload-325431
I0127 13:28:32.856511 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHPort
I0127 13:28:32.856742 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHKeyPath
I0127 13:28:32.856887 528954 main.go:141] libmachine: (no-preload-325431) Calling .GetSSHUsername
I0127 13:28:32.857019 528954 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/no-preload-325431/id_rsa Username:docker}
I0127 13:28:33.006015 528954 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0127 13:28:33.027005 528954 node_ready.go:35] waiting up to 6m0s for node "no-preload-325431" to be "Ready" ...
I0127 13:28:33.066405 528954 node_ready.go:49] node "no-preload-325431" has status "Ready":"True"
I0127 13:28:33.066442 528954 node_ready.go:38] duration metric: took 39.39561ms for node "no-preload-325431" to be "Ready" ...
I0127 13:28:33.066457 528954 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0127 13:28:33.104507 528954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0127 13:28:33.115586 528954 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-325431" in "kube-system" namespace to be "Ready" ...
I0127 13:28:33.198966 528954 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0127 13:28:33.199005 528954 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0127 13:28:33.252334 528954 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0127 13:28:33.252374 528954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0127 13:28:33.252518 528954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0127 13:28:33.268119 528954 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0127 13:28:33.268153 528954 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0127 13:28:33.353884 528954 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0127 13:28:33.353918 528954 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0127 13:28:33.363468 528954 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0127 13:28:33.363509 528954 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0127 13:28:33.429294 528954 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0127 13:28:33.429332 528954 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0127 13:28:33.469451 528954 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0127 13:28:33.469488 528954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0127 13:28:33.516000 528954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0127 13:28:33.609014 528954 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0127 13:28:33.609050 528954 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0127 13:28:33.663870 528954 pod_ready.go:93] pod "etcd-no-preload-325431" in "kube-system" namespace has status "Ready":"True"
I0127 13:28:33.663902 528954 pod_ready.go:82] duration metric: took 548.28046ms for pod "etcd-no-preload-325431" in "kube-system" namespace to be "Ready" ...
I0127 13:28:33.663918 528954 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-325431" in "kube-system" namespace to be "Ready" ...
I0127 13:28:33.743380 528954 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0127 13:28:33.743415 528954 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0127 13:28:33.906899 528954 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0127 13:28:33.906931 528954 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0127 13:28:33.989880 528954 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0127 13:28:33.989985 528954 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0127 13:28:34.084465 528954 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0127 13:28:34.084497 528954 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0127 13:28:34.157593 528954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0127 13:28:34.559022 528954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.454458733s)
I0127 13:28:34.559092 528954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.306537266s)
I0127 13:28:34.559153 528954 main.go:141] libmachine: Making call to close driver server
I0127 13:28:34.559099 528954 main.go:141] libmachine: Making call to close driver server
I0127 13:28:34.559215 528954 main.go:141] libmachine: (no-preload-325431) Calling .Close
I0127 13:28:34.559175 528954 main.go:141] libmachine: (no-preload-325431) Calling .Close
I0127 13:28:34.559617 528954 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:28:34.559636 528954 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 13:28:34.559652 528954 main.go:141] libmachine: Making call to close driver server
I0127 13:28:34.559661 528954 main.go:141] libmachine: (no-preload-325431) Calling .Close
I0127 13:28:34.559760 528954 main.go:141] libmachine: (no-preload-325431) DBG | Closing plugin on server side
I0127 13:28:34.559812 528954 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:28:34.559830 528954 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 13:28:34.559842 528954 main.go:141] libmachine: Making call to close driver server
I0127 13:28:34.559875 528954 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:28:34.559893 528954 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 13:28:34.559951 528954 main.go:141] libmachine: (no-preload-325431) DBG | Closing plugin on server side
I0127 13:28:34.559880 528954 main.go:141] libmachine: (no-preload-325431) Calling .Close
I0127 13:28:34.560364 528954 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:28:34.560386 528954 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 13:28:34.587657 528954 main.go:141] libmachine: Making call to close driver server
I0127 13:28:34.587694 528954 main.go:141] libmachine: (no-preload-325431) Calling .Close
I0127 13:28:34.588235 528954 main.go:141] libmachine: (no-preload-325431) DBG | Closing plugin on server side
I0127 13:28:34.588257 528954 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:28:34.588306 528954 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 13:28:35.333995 528954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.817938304s)
I0127 13:28:35.334057 528954 main.go:141] libmachine: Making call to close driver server
I0127 13:28:35.334071 528954 main.go:141] libmachine: (no-preload-325431) Calling .Close
I0127 13:28:35.334464 528954 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:28:35.334497 528954 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 13:28:35.334508 528954 main.go:141] libmachine: Making call to close driver server
I0127 13:28:35.334516 528954 main.go:141] libmachine: (no-preload-325431) Calling .Close
I0127 13:28:35.334790 528954 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:28:35.334814 528954 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 13:28:35.334827 528954 addons.go:479] Verifying addon metrics-server=true in "no-preload-325431"
I0127 13:28:35.686543 528954 pod_ready.go:103] pod "kube-apiserver-no-preload-325431" in "kube-system" namespace has status "Ready":"False"
I0127 13:28:36.551697 528954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.394050922s)
I0127 13:28:36.551766 528954 main.go:141] libmachine: Making call to close driver server
I0127 13:28:36.551778 528954 main.go:141] libmachine: (no-preload-325431) Calling .Close
I0127 13:28:36.552197 528954 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:28:36.552291 528954 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 13:28:36.552336 528954 main.go:141] libmachine: Making call to close driver server
I0127 13:28:36.552379 528954 main.go:141] libmachine: (no-preload-325431) Calling .Close
I0127 13:28:36.552264 528954 main.go:141] libmachine: (no-preload-325431) DBG | Closing plugin on server side
I0127 13:28:36.554273 528954 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:28:36.554297 528954 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 13:28:36.554277 528954 main.go:141] libmachine: (no-preload-325431) DBG | Closing plugin on server side
I0127 13:28:36.556095 528954 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p no-preload-325431 addons enable metrics-server
I0127 13:28:36.557682 528954 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
I0127 13:28:36.559221 528954 addons.go:514] duration metric: took 3.787479018s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
I0127 13:28:38.171680 528954 pod_ready.go:103] pod "kube-apiserver-no-preload-325431" in "kube-system" namespace has status "Ready":"False"
I0127 13:28:40.671375 528954 pod_ready.go:103] pod "kube-apiserver-no-preload-325431" in "kube-system" namespace has status "Ready":"False"
I0127 13:28:41.171716 528954 pod_ready.go:93] pod "kube-apiserver-no-preload-325431" in "kube-system" namespace has status "Ready":"True"
I0127 13:28:41.171759 528954 pod_ready.go:82] duration metric: took 7.507831849s for pod "kube-apiserver-no-preload-325431" in "kube-system" namespace to be "Ready" ...
I0127 13:28:41.171776 528954 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-325431" in "kube-system" namespace to be "Ready" ...
I0127 13:28:41.177006 528954 pod_ready.go:93] pod "kube-controller-manager-no-preload-325431" in "kube-system" namespace has status "Ready":"True"
I0127 13:28:41.177037 528954 pod_ready.go:82] duration metric: took 5.251769ms for pod "kube-controller-manager-no-preload-325431" in "kube-system" namespace to be "Ready" ...
I0127 13:28:41.177051 528954 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-325431" in "kube-system" namespace to be "Ready" ...
I0127 13:28:41.185589 528954 pod_ready.go:93] pod "kube-scheduler-no-preload-325431" in "kube-system" namespace has status "Ready":"True"
I0127 13:28:41.185623 528954 pod_ready.go:82] duration metric: took 8.562889ms for pod "kube-scheduler-no-preload-325431" in "kube-system" namespace to be "Ready" ...
I0127 13:28:41.185635 528954 pod_ready.go:39] duration metric: took 8.119162889s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
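"Ready" in the lines above refers to the PodReady condition in each pod's status, the same signal kubectl surfaces in its STATUS column. A minimal client-go sketch of that kind of readiness check (the kubeconfig path and pod name are taken from this log for illustration; this is not minikube's actual pod_ready.go code):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's PodReady condition is True.
    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	// Illustrative kubeconfig path; any admin kubeconfig for the cluster works.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pod, err := cs.CoreV1().Pods("kube-system").Get(
    		context.Background(), "kube-apiserver-no-preload-325431", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("Ready:", podReady(pod))
    }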
I0127 13:28:41.185667 528954 api_server.go:52] waiting for apiserver process to appear ...
I0127 13:28:41.185750 528954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 13:28:41.211566 528954 api_server.go:72] duration metric: took 8.439896874s to wait for apiserver process to appear ...
I0127 13:28:41.211674 528954 api_server.go:88] waiting for apiserver healthz status ...
I0127 13:28:41.211718 528954 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8443/healthz ...
I0127 13:28:41.218905 528954 api_server.go:279] https://192.168.50.116:8443/healthz returned 200:
ok
I0127 13:28:41.221906 528954 api_server.go:141] control plane version: v1.32.1
I0127 13:28:41.221942 528954 api_server.go:131] duration metric: took 10.24564ms to wait for apiserver health ...
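The healthz probe above is a plain HTTPS GET: the apiserver serves /healthz (along with /livez and /readyz) to unauthenticated clients through the default system:public-info-viewer binding and answers 200 with the body "ok". A minimal sketch of the same probe, assuming the address from this log and skipping TLS verification because the cluster CA is not in the host trust store:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    )

    func main() {
    	client := &http.Client{
    		Transport: &http.Transport{
    			// The apiserver cert is signed by minikubeCA, which the host
    			// does not trust, so verification is skipped for this probe.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://192.168.50.116:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
    }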
I0127 13:28:41.221954 528954 system_pods.go:43] waiting for kube-system pods to appear ...
I0127 13:28:41.237725 528954 system_pods.go:59] 9 kube-system pods found
I0127 13:28:41.237853 528954 system_pods.go:61] "coredns-668d6bf9bc-4qzkt" [07cf0c66-5805-4c95-81d5-88276ae8634f] Running
I0127 13:28:41.237881 528954 system_pods.go:61] "coredns-668d6bf9bc-hpb7s" [73baecfb-5361-4d5f-b11d-a8b361f28fb8] Running
I0127 13:28:41.237910 528954 system_pods.go:61] "etcd-no-preload-325431" [7b6f6b5c-6e2d-425b-9311-565ea323e42d] Running
I0127 13:28:41.237933 528954 system_pods.go:61] "kube-apiserver-no-preload-325431" [edbe877d-de59-41e4-9bc4-0f11b4b191aa] Running
I0127 13:28:41.237956 528954 system_pods.go:61] "kube-controller-manager-no-preload-325431" [01168381-3ea7-4439-8ba7-d31dbee82a05] Running
I0127 13:28:41.237971 528954 system_pods.go:61] "kube-proxy-sxztd" [b2ce07c8-7354-4a9d-87a4-af9c46bf3ad3] Running
I0127 13:28:41.237985 528954 system_pods.go:61] "kube-scheduler-no-preload-325431" [b20fc6de-09d5-4db0-a1b2-d20570df69b1] Running
I0127 13:28:41.238019 528954 system_pods.go:61] "metrics-server-f79f97bbb-z7vjh" [f904e246-cad3-4c86-8a01-f8eea49bf563] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0127 13:28:41.238035 528954 system_pods.go:61] "storage-provisioner" [241c0e33-1145-46f6-abbe-f7e75ada3578] Running
I0127 13:28:41.238058 528954 system_pods.go:74] duration metric: took 16.0946ms to wait for pod list to return data ...
I0127 13:28:41.238100 528954 default_sa.go:34] waiting for default service account to be created ...
I0127 13:28:41.242966 528954 default_sa.go:45] found service account: "default"
I0127 13:28:41.242995 528954 default_sa.go:55] duration metric: took 4.876772ms for default service account to be created ...
I0127 13:28:41.243009 528954 system_pods.go:137] waiting for k8s-apps to be running ...
I0127 13:28:41.250843 528954 system_pods.go:87] 9 kube-system pods found
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p no-preload-325431 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.32.1": signal: killed
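The "signal: killed" suffix is how Go's os/exec package reports a child process that died from SIGKILL: the harness killed the still-running `minikube start` (typically on timeout expiry) rather than minikube failing on its own. A self-contained sketch that reproduces the same error string (the sleep command and one-second deadline are illustrative, not the harness's actual code):

    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	// CommandContext sends SIGKILL when the context deadline passes;
    	// the resulting *exec.ExitError prints as "signal: killed".
    	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
    	defer cancel()
    	if err := exec.CommandContext(ctx, "sleep", "10").Run(); err != nil {
    		fmt.Println(err) // signal: killed
    	}
    }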
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-325431 -n no-preload-325431
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p no-preload-325431 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-325431 logs -n 25: (1.487259668s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/SecondStart logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
| addons | enable metrics-server -p old-k8s-version-116657 | old-k8s-version-116657 | jenkins | v1.35.0 | 27 Jan 25 13:23 UTC | 27 Jan 25 13:23 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p old-k8s-version-116657 | old-k8s-version-116657 | jenkins | v1.35.0 | 27 Jan 25 13:23 UTC | 27 Jan 25 13:24 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p no-preload-325431 | no-preload-325431 | jenkins | v1.35.0 | 27 Jan 25 13:23 UTC | 27 Jan 25 13:23 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p no-preload-325431 | no-preload-325431 | jenkins | v1.35.0 | 27 Jan 25 13:23 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --preload=false --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.1 | | | | | |
| addons | enable dashboard -p embed-certs-766944 | embed-certs-766944 | jenkins | v1.35.0 | 27 Jan 25 13:24 UTC | 27 Jan 25 13:24 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p embed-certs-766944 | embed-certs-766944 | jenkins | v1.35.0 | 27 Jan 25 13:24 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --embed-certs --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.1 | | | | | |
| addons | enable dashboard -p default-k8s-diff-port-325510 | default-k8s-diff-port-325510 | jenkins | v1.35.0 | 27 Jan 25 13:24 UTC | 27 Jan 25 13:24 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p | default-k8s-diff-port-325510 | jenkins | v1.35.0 | 27 Jan 25 13:24 UTC | |
| | default-k8s-diff-port-325510 | | | | | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --apiserver-port=8444 | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.1 | | | | | |
| addons | enable dashboard -p old-k8s-version-116657 | old-k8s-version-116657 | jenkins | v1.35.0 | 27 Jan 25 13:24 UTC | 27 Jan 25 13:24 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p old-k8s-version-116657 | old-k8s-version-116657 | jenkins | v1.35.0 | 27 Jan 25 13:24 UTC | 27 Jan 25 13:27 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| image | old-k8s-version-116657 image | old-k8s-version-116657 | jenkins | v1.35.0 | 27 Jan 25 13:27 UTC | 27 Jan 25 13:27 UTC |
| | list --format=json | | | | | |
| pause | -p old-k8s-version-116657 | old-k8s-version-116657 | jenkins | v1.35.0 | 27 Jan 25 13:27 UTC | 27 Jan 25 13:27 UTC |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p old-k8s-version-116657 | old-k8s-version-116657 | jenkins | v1.35.0 | 27 Jan 25 13:27 UTC | 27 Jan 25 13:27 UTC |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p old-k8s-version-116657 | old-k8s-version-116657 | jenkins | v1.35.0 | 27 Jan 25 13:27 UTC | 27 Jan 25 13:27 UTC |
| delete | -p old-k8s-version-116657 | old-k8s-version-116657 | jenkins | v1.35.0 | 27 Jan 25 13:27 UTC | 27 Jan 25 13:27 UTC |
| start | -p newest-cni-296225 --memory=2200 --alsologtostderr | newest-cni-296225 | jenkins | v1.35.0 | 27 Jan 25 13:27 UTC | 27 Jan 25 13:28 UTC |
| | --wait=apiserver,system_pods,default_sa | | | | | |
| | --network-plugin=cni | | | | | |
| | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 | | | | | |
| | --driver=kvm2 --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.1 | | | | | |
| addons | enable metrics-server -p newest-cni-296225 | newest-cni-296225 | jenkins | v1.35.0 | 27 Jan 25 13:28 UTC | 27 Jan 25 13:28 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p newest-cni-296225 | newest-cni-296225 | jenkins | v1.35.0 | 27 Jan 25 13:28 UTC | 27 Jan 25 13:28 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p newest-cni-296225 | newest-cni-296225 | jenkins | v1.35.0 | 27 Jan 25 13:28 UTC | 27 Jan 25 13:28 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p newest-cni-296225 --memory=2200 --alsologtostderr | newest-cni-296225 | jenkins | v1.35.0 | 27 Jan 25 13:28 UTC | 27 Jan 25 13:29 UTC |
| | --wait=apiserver,system_pods,default_sa | | | | | |
| | --network-plugin=cni | | | | | |
| | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 | | | | | |
| | --driver=kvm2 --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.1 | | | | | |
| image | newest-cni-296225 image list | newest-cni-296225 | jenkins | v1.35.0 | 27 Jan 25 13:29 UTC | 27 Jan 25 13:29 UTC |
| | --format=json | | | | | |
| pause | -p newest-cni-296225 | newest-cni-296225 | jenkins | v1.35.0 | 27 Jan 25 13:29 UTC | 27 Jan 25 13:29 UTC |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p newest-cni-296225 | newest-cni-296225 | jenkins | v1.35.0 | 27 Jan 25 13:29 UTC | 27 Jan 25 13:29 UTC |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p newest-cni-296225 | newest-cni-296225 | jenkins | v1.35.0 | 27 Jan 25 13:29 UTC | 27 Jan 25 13:29 UTC |
| delete | -p newest-cni-296225 | newest-cni-296225 | jenkins | v1.35.0 | 27 Jan 25 13:29 UTC | 27 Jan 25 13:29 UTC |
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2025/01/27 13:28:56
Running on machine: ubuntu-20-agent-3
Binary: Built with gc go1.23.4 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0127 13:28:56.167206 531586 out.go:345] Setting OutFile to fd 1 ...
I0127 13:28:56.167420 531586 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 13:28:56.167436 531586 out.go:358] Setting ErrFile to fd 2...
I0127 13:28:56.167442 531586 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 13:28:56.167737 531586 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-466901/.minikube/bin
I0127 13:28:56.168827 531586 out.go:352] Setting JSON to false
I0127 13:28:56.169977 531586 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":36633,"bootTime":1737947903,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0127 13:28:56.170093 531586 start.go:139] virtualization: kvm guest
I0127 13:28:56.172461 531586 out.go:177] * [newest-cni-296225] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
I0127 13:28:56.174020 531586 notify.go:220] Checking for updates...
I0127 13:28:56.174033 531586 out.go:177] - MINIKUBE_LOCATION=20317
I0127 13:28:56.175512 531586 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0127 13:28:56.176838 531586 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20317-466901/kubeconfig
I0127 13:28:56.178184 531586 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-466901/.minikube
I0127 13:28:56.179518 531586 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0127 13:28:56.180891 531586 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0127 13:28:56.182708 531586 config.go:182] Loaded profile config "newest-cni-296225": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 13:28:56.183131 531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:28:56.183194 531586 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:28:56.200308 531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43915
I0127 13:28:56.201060 531586 main.go:141] libmachine: () Calling .GetVersion
I0127 13:28:56.201765 531586 main.go:141] libmachine: Using API Version 1
I0127 13:28:56.201797 531586 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:28:56.202181 531586 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:28:56.202408 531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
I0127 13:28:56.202728 531586 driver.go:394] Setting default libvirt URI to qemu:///system
I0127 13:28:56.203250 531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:28:56.203319 531586 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:28:56.220011 531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33981
I0127 13:28:56.220435 531586 main.go:141] libmachine: () Calling .GetVersion
I0127 13:28:56.220978 531586 main.go:141] libmachine: Using API Version 1
I0127 13:28:56.221006 531586 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:28:56.221409 531586 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:28:56.221606 531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
I0127 13:28:56.258580 531586 out.go:177] * Using the kvm2 driver based on existing profile
I0127 13:28:56.260066 531586 start.go:297] selected driver: kvm2
I0127 13:28:56.260097 531586 start.go:901] validating driver "kvm2" against &{Name:newest-cni-296225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-296225 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.46 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0127 13:28:56.260225 531586 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0127 13:28:56.260938 531586 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 13:28:56.261024 531586 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20317-466901/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0127 13:28:56.277111 531586 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
I0127 13:28:56.277523 531586 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
I0127 13:28:56.277560 531586 cni.go:84] Creating CNI manager for ""
I0127 13:28:56.277605 531586 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0127 13:28:56.277639 531586 start.go:340] cluster config:
{Name:newest-cni-296225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-296225 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.46 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0127 13:28:56.277740 531586 iso.go:125] acquiring lock: {Name:mkcc3db98c9d4661e75c49bd9b203d0232dff8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 13:28:56.280361 531586 out.go:177] * Starting "newest-cni-296225" primary control-plane node in "newest-cni-296225" cluster
I0127 13:28:56.281606 531586 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
I0127 13:28:56.281678 531586 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20317-466901/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
I0127 13:28:56.281692 531586 cache.go:56] Caching tarball of preloaded images
I0127 13:28:56.281783 531586 preload.go:172] Found /home/jenkins/minikube-integration/20317-466901/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0127 13:28:56.281796 531586 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on containerd
I0127 13:28:56.281935 531586 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/newest-cni-296225/config.json ...
I0127 13:28:56.282191 531586 start.go:360] acquireMachinesLock for newest-cni-296225: {Name:mke115b779db52cb0a5f0a05f83d5bad0a35c561 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0127 13:28:56.282273 531586 start.go:364] duration metric: took 45.538µs to acquireMachinesLock for "newest-cni-296225"
I0127 13:28:56.282297 531586 start.go:96] Skipping create...Using existing machine configuration
I0127 13:28:56.282306 531586 fix.go:54] fixHost starting:
I0127 13:28:56.282589 531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:28:56.282621 531586 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:28:56.298876 531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38851
I0127 13:28:56.299391 531586 main.go:141] libmachine: () Calling .GetVersion
I0127 13:28:56.299946 531586 main.go:141] libmachine: Using API Version 1
I0127 13:28:56.299975 531586 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:28:56.300339 531586 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:28:56.300605 531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
I0127 13:28:56.300813 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetState
I0127 13:28:56.302631 531586 fix.go:112] recreateIfNeeded on newest-cni-296225: state=Stopped err=<nil>
I0127 13:28:56.302659 531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
W0127 13:28:56.302822 531586 fix.go:138] unexpected machine state, will restart: <nil>
I0127 13:28:56.304762 531586 out.go:177] * Restarting existing kvm2 VM for "newest-cni-296225" ...
I0127 13:28:53.806392 529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
I0127 13:28:55.806518 529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
I0127 13:28:57.808012 529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
I0127 13:28:55.406991 529251 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.84407049s)
I0127 13:28:55.407062 529251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0127 13:28:55.426120 529251 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0127 13:28:55.438195 529251 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0127 13:28:55.457399 529251 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0127 13:28:55.457425 529251 kubeadm.go:157] found existing configuration files:
I0127 13:28:55.457485 529251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0127 13:28:55.469544 529251 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0127 13:28:55.469611 529251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0127 13:28:55.481065 529251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0127 13:28:55.492868 529251 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0127 13:28:55.492928 529251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0127 13:28:55.505930 529251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0127 13:28:55.517268 529251 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0127 13:28:55.517332 529251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0127 13:28:55.528681 529251 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0127 13:28:55.539678 529251 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0127 13:28:55.539755 529251 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0127 13:28:55.550987 529251 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0127 13:28:55.719870 529251 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0127 13:28:56.306046 531586 main.go:141] libmachine: (newest-cni-296225) Calling .Start
I0127 13:28:56.306254 531586 main.go:141] libmachine: (newest-cni-296225) starting domain...
I0127 13:28:56.306277 531586 main.go:141] libmachine: (newest-cni-296225) ensuring networks are active...
I0127 13:28:56.307157 531586 main.go:141] libmachine: (newest-cni-296225) Ensuring network default is active
I0127 13:28:56.307587 531586 main.go:141] libmachine: (newest-cni-296225) Ensuring network mk-newest-cni-296225 is active
I0127 13:28:56.307960 531586 main.go:141] libmachine: (newest-cni-296225) getting domain XML...
I0127 13:28:56.308646 531586 main.go:141] libmachine: (newest-cni-296225) creating domain...
I0127 13:28:57.604425 531586 main.go:141] libmachine: (newest-cni-296225) waiting for IP...
I0127 13:28:57.605479 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:28:57.606123 531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
I0127 13:28:57.606254 531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:28:57.606079 531622 retry.go:31] will retry after 235.333873ms: waiting for domain to come up
I0127 13:28:57.843349 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:28:57.843843 531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
I0127 13:28:57.843877 531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:28:57.843796 531622 retry.go:31] will retry after 261.244379ms: waiting for domain to come up
I0127 13:28:58.107236 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:28:58.107847 531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
I0127 13:28:58.107885 531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:28:58.107815 531622 retry.go:31] will retry after 367.467141ms: waiting for domain to come up
I0127 13:28:58.477662 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:28:58.478416 531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
I0127 13:28:58.478454 531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:28:58.478385 531622 retry.go:31] will retry after 466.451127ms: waiting for domain to come up
I0127 13:28:58.946239 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:28:58.946809 531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
I0127 13:28:58.946854 531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:28:58.946766 531622 retry.go:31] will retry after 559.614953ms: waiting for domain to come up
I0127 13:28:59.507817 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:28:59.508251 531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
I0127 13:28:59.508317 531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:28:59.508231 531622 retry.go:31] will retry after 651.013274ms: waiting for domain to come up
I0127 13:29:00.161338 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:29:00.161916 531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
I0127 13:29:00.161944 531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:29:00.161879 531622 retry.go:31] will retry after 780.526485ms: waiting for domain to come up
I0127 13:29:00.944251 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:29:00.944845 531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
I0127 13:29:00.944875 531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:29:00.944817 531622 retry.go:31] will retry after 1.304098s: waiting for domain to come up
I0127 13:28:59.808090 529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
I0127 13:29:01.808480 529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
I0127 13:29:04.273698 529251 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
I0127 13:29:04.273779 529251 kubeadm.go:310] [preflight] Running pre-flight checks
I0127 13:29:04.273879 529251 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0127 13:29:04.274011 529251 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0127 13:29:04.274137 529251 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0127 13:29:04.274229 529251 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0127 13:29:04.275837 529251 out.go:235] - Generating certificates and keys ...
I0127 13:29:04.275953 529251 kubeadm.go:310] [certs] Using existing ca certificate authority
I0127 13:29:04.276042 529251 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0127 13:29:04.276162 529251 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0127 13:29:04.276253 529251 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
I0127 13:29:04.276359 529251 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
I0127 13:29:04.276440 529251 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
I0127 13:29:04.276535 529251 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
I0127 13:29:04.276675 529251 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
I0127 13:29:04.276764 529251 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0127 13:29:04.276906 529251 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0127 13:29:04.276967 529251 kubeadm.go:310] [certs] Using the existing "sa" key
I0127 13:29:04.277065 529251 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0127 13:29:04.277113 529251 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0127 13:29:04.277186 529251 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0127 13:29:04.277274 529251 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0127 13:29:04.277381 529251 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0127 13:29:04.277460 529251 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0127 13:29:04.277559 529251 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0127 13:29:04.277647 529251 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0127 13:29:04.280280 529251 out.go:235] - Booting up control plane ...
I0127 13:29:04.280412 529251 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0127 13:29:04.280494 529251 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0127 13:29:04.280588 529251 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0127 13:29:04.280708 529251 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0127 13:29:04.280854 529251 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0127 13:29:04.280919 529251 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0127 13:29:04.281101 529251 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0127 13:29:04.281252 529251 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0127 13:29:04.281343 529251 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002900104s
I0127 13:29:04.281472 529251 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0127 13:29:04.281557 529251 kubeadm.go:310] [api-check] The API server is healthy after 5.001737119s
I0127 13:29:04.281687 529251 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0127 13:29:04.281880 529251 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0127 13:29:04.281947 529251 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0127 13:29:04.282181 529251 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-766944 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0127 13:29:04.282286 529251 kubeadm.go:310] [bootstrap-token] Using token: cubj1b.pwpdo0hgbjp08kat
I0127 13:29:04.283697 529251 out.go:235] - Configuring RBAC rules ...
I0127 13:29:04.283851 529251 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0127 13:29:04.283970 529251 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0127 13:29:04.284120 529251 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0127 13:29:04.284293 529251 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0127 13:29:04.284399 529251 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0127 13:29:04.284473 529251 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0127 13:29:04.284576 529251 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0127 13:29:04.284615 529251 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0127 13:29:04.284679 529251 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0127 13:29:04.284689 529251 kubeadm.go:310]
I0127 13:29:04.284780 529251 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0127 13:29:04.284794 529251 kubeadm.go:310]
I0127 13:29:04.284891 529251 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0127 13:29:04.284900 529251 kubeadm.go:310]
I0127 13:29:04.284950 529251 kubeadm.go:310] mkdir -p $HOME/.kube
I0127 13:29:04.285047 529251 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0127 13:29:04.285134 529251 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0127 13:29:04.285146 529251 kubeadm.go:310]
I0127 13:29:04.285267 529251 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0127 13:29:04.285279 529251 kubeadm.go:310]
I0127 13:29:04.285341 529251 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0127 13:29:04.285356 529251 kubeadm.go:310]
I0127 13:29:04.285410 529251 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0127 13:29:04.285478 529251 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0127 13:29:04.285536 529251 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0127 13:29:04.285542 529251 kubeadm.go:310]
I0127 13:29:04.285636 529251 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0127 13:29:04.285723 529251 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0127 13:29:04.285731 529251 kubeadm.go:310]
I0127 13:29:04.285803 529251 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token cubj1b.pwpdo0hgbjp08kat \
I0127 13:29:04.285958 529251 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:44e7ea386d1f8e7ab1336d835156dd84ecca20069390afc88f04bb1a3c629fd2 \
I0127 13:29:04.285997 529251 kubeadm.go:310] --control-plane
I0127 13:29:04.286004 529251 kubeadm.go:310]
I0127 13:29:04.286115 529251 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0127 13:29:04.286121 529251 kubeadm.go:310]
I0127 13:29:04.286247 529251 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token cubj1b.pwpdo0hgbjp08kat \
I0127 13:29:04.286407 529251 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:44e7ea386d1f8e7ab1336d835156dd84ecca20069390afc88f04bb1a3c629fd2
I0127 13:29:04.286424 529251 cni.go:84] Creating CNI manager for ""
I0127 13:29:04.286436 529251 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0127 13:29:04.288049 529251 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0127 13:29:02.250183 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:29:02.250724 531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
I0127 13:29:02.250759 531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:29:02.250691 531622 retry.go:31] will retry after 1.464046224s: waiting for domain to come up
I0127 13:29:03.716441 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:29:03.716968 531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
I0127 13:29:03.716995 531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:29:03.716911 531622 retry.go:31] will retry after 1.473613486s: waiting for domain to come up
I0127 13:29:05.192629 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:29:05.193220 531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
I0127 13:29:05.193256 531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:29:05.193184 531622 retry.go:31] will retry after 1.906374841s: waiting for domain to come up
I0127 13:29:04.289218 529251 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0127 13:29:04.306228 529251 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
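The file written above, /etc/cni/net.d/1-k8s.conflist, is a standard CNI network configuration list for the bridge plugin that minikube recommends for the kvm2 + containerd combination. An illustrative example of the conflist format only; this is not the actual 496-byte file minikube copies, and the subnet shown is hypothetical:

    {
      "cniVersion": "0.4.0",
      "name": "k8s-pod-network",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }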
I0127 13:29:04.327835 529251 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0127 13:29:04.328008 529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 13:29:04.328068 529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-766944 minikube.k8s.io/updated_at=2025_01_27T13_29_04_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=0d71ce9b1959d04f0d9fa7dbc5639a49619ad89b minikube.k8s.io/name=embed-certs-766944 minikube.k8s.io/primary=true
I0127 13:29:04.340778 529251 ops.go:34] apiserver oom_adj: -16
I0127 13:29:04.617241 529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 13:29:05.117682 529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 13:29:05.618141 529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 13:29:06.117679 529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 13:29:06.618036 529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 13:29:07.118302 529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 13:29:07.618303 529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 13:29:08.117464 529251 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 13:29:08.221604 529251 kubeadm.go:1113] duration metric: took 3.893670046s to wait for elevateKubeSystemPrivileges
I0127 13:29:08.221659 529251 kubeadm.go:394] duration metric: took 4m36.506709461s to StartCluster
I0127 13:29:08.221687 529251 settings.go:142] acquiring lock: {Name:mk070ebf22d35da2704f00750921836dbd2cd121 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 13:29:08.221784 529251 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20317-466901/kubeconfig
I0127 13:29:08.223152 529251 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-466901/kubeconfig: {Name:mkc116eec378af43ea8fefe45e11af3e19be85bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 13:29:08.223468 529251 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0127 13:29:08.223561 529251 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0127 13:29:08.223686 529251 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-766944"
I0127 13:29:08.223707 529251 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-766944"
W0127 13:29:08.223715 529251 addons.go:247] addon storage-provisioner should already be in state true
I0127 13:29:08.223720 529251 addons.go:69] Setting default-storageclass=true in profile "embed-certs-766944"
I0127 13:29:08.223775 529251 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-766944"
I0127 13:29:08.223766 529251 addons.go:69] Setting dashboard=true in profile "embed-certs-766944"
I0127 13:29:08.223766 529251 addons.go:69] Setting metrics-server=true in profile "embed-certs-766944"
I0127 13:29:08.223788 529251 config.go:182] Loaded profile config "embed-certs-766944": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 13:29:08.223797 529251 addons.go:238] Setting addon dashboard=true in "embed-certs-766944"
I0127 13:29:08.223800 529251 addons.go:238] Setting addon metrics-server=true in "embed-certs-766944"
W0127 13:29:08.223808 529251 addons.go:247] addon metrics-server should already be in state true
W0127 13:29:08.223808 529251 addons.go:247] addon dashboard should already be in state true
I0127 13:29:08.223748 529251 host.go:66] Checking if "embed-certs-766944" exists ...
I0127 13:29:08.223840 529251 host.go:66] Checking if "embed-certs-766944" exists ...
I0127 13:29:08.223862 529251 host.go:66] Checking if "embed-certs-766944" exists ...
I0127 13:29:08.224260 529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:29:08.224276 529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:29:08.224288 529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:29:08.224294 529251 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:29:08.224311 529251 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:29:08.224322 529251 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:29:08.224276 529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:29:08.224390 529251 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:29:08.225260 529251 out.go:177] * Verifying Kubernetes components...
I0127 13:29:08.226552 529251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 13:29:08.244300 529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46203
I0127 13:29:08.244514 529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46005
I0127 13:29:08.244516 529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39053
I0127 13:29:08.245012 529251 main.go:141] libmachine: () Calling .GetVersion
I0127 13:29:08.245254 529251 main.go:141] libmachine: () Calling .GetVersion
I0127 13:29:08.245333 529251 main.go:141] libmachine: () Calling .GetVersion
I0127 13:29:08.245603 529251 main.go:141] libmachine: Using API Version 1
I0127 13:29:08.245621 529251 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:29:08.245769 529251 main.go:141] libmachine: Using API Version 1
I0127 13:29:08.245780 529251 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:29:08.245787 529251 main.go:141] libmachine: Using API Version 1
I0127 13:29:08.245804 529251 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:29:08.246187 529251 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:29:08.246236 529251 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:29:08.246240 529251 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:29:08.246450 529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetState
I0127 13:29:08.246858 529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:29:08.246858 529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:29:08.246898 529251 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:29:08.246908 529251 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:29:08.246957 529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41683
I0127 13:29:08.247392 529251 main.go:141] libmachine: () Calling .GetVersion
I0127 13:29:08.248029 529251 main.go:141] libmachine: Using API Version 1
I0127 13:29:08.248055 529251 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:29:08.248479 529251 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:29:08.249163 529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:29:08.249212 529251 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:29:08.251401 529251 addons.go:238] Setting addon default-storageclass=true in "embed-certs-766944"
W0127 13:29:08.251426 529251 addons.go:247] addon default-storageclass should already be in state true
I0127 13:29:08.251459 529251 host.go:66] Checking if "embed-certs-766944" exists ...
I0127 13:29:08.251834 529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:29:08.251888 529251 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:29:08.268388 529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36109
I0127 13:29:08.268957 529251 main.go:141] libmachine: () Calling .GetVersion
I0127 13:29:08.269472 529251 main.go:141] libmachine: Using API Version 1
I0127 13:29:08.269488 529251 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:29:08.269556 529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43787
I0127 13:29:08.269902 529251 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:29:08.270014 529251 main.go:141] libmachine: () Calling .GetVersion
I0127 13:29:08.270112 529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetState
I0127 13:29:08.270466 529251 main.go:141] libmachine: Using API Version 1
I0127 13:29:08.270483 529251 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:29:08.270877 529251 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:29:08.271178 529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetState
I0127 13:29:08.272419 529251 main.go:141] libmachine: (embed-certs-766944) Calling .DriverName
I0127 13:29:08.273919 529251 main.go:141] libmachine: (embed-certs-766944) Calling .DriverName
I0127 13:29:08.274603 529251 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0127 13:29:08.275601 529251 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0127 13:29:08.276632 529251 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0127 13:29:08.276650 529251 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0127 13:29:08.276675 529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHHostname
I0127 13:29:08.277578 529251 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0127 13:29:08.277591 529251 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0127 13:29:08.277605 529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHHostname
I0127 13:29:08.278681 529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36115
I0127 13:29:08.279322 529251 main.go:141] libmachine: () Calling .GetVersion
I0127 13:29:08.280065 529251 main.go:141] libmachine: Using API Version 1
I0127 13:29:08.280083 529251 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:29:08.280587 529251 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:29:08.280859 529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetState
I0127 13:29:08.282532 529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
I0127 13:29:08.282997 529251 main.go:141] libmachine: (embed-certs-766944) Calling .DriverName
I0127 13:29:08.283505 529251 main.go:141] libmachine: (embed-certs-766944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4a:1b", ip: ""} in network mk-embed-certs-766944: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:21 +0000 UTC Type:0 Mac:52:54:00:73:4a:1b Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:embed-certs-766944 Clientid:01:52:54:00:73:4a:1b}
I0127 13:29:08.283533 529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined IP address 192.168.39.24 and MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
I0127 13:29:08.283908 529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHPort
I0127 13:29:08.284083 529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHKeyPath
I0127 13:29:08.284241 529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHUsername
I0127 13:29:08.284285 529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
I0127 13:29:08.284416 529251 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/embed-certs-766944/id_rsa Username:docker}
I0127 13:29:08.284808 529251 main.go:141] libmachine: (embed-certs-766944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4a:1b", ip: ""} in network mk-embed-certs-766944: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:21 +0000 UTC Type:0 Mac:52:54:00:73:4a:1b Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:embed-certs-766944 Clientid:01:52:54:00:73:4a:1b}
I0127 13:29:08.284841 529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined IP address 192.168.39.24 and MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
I0127 13:29:08.284853 529251 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0127 13:29:03.808549 529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
I0127 13:29:05.809379 529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
I0127 13:29:08.287154 529251 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0127 13:29:08.287385 529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHPort
I0127 13:29:08.287589 529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHKeyPath
I0127 13:29:08.287760 529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHUsername
I0127 13:29:08.287917 529251 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/embed-certs-766944/id_rsa Username:docker}
I0127 13:29:08.288316 529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0127 13:29:08.288338 529251 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0127 13:29:08.288353 529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHHostname
I0127 13:29:08.292370 529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
I0127 13:29:08.292819 529251 main.go:141] libmachine: (embed-certs-766944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4a:1b", ip: ""} in network mk-embed-certs-766944: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:21 +0000 UTC Type:0 Mac:52:54:00:73:4a:1b Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:embed-certs-766944 Clientid:01:52:54:00:73:4a:1b}
I0127 13:29:08.292844 529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined IP address 192.168.39.24 and MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
I0127 13:29:08.293148 529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHPort
I0127 13:29:08.293268 529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41237
I0127 13:29:08.293441 529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHKeyPath
I0127 13:29:08.293632 529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHUsername
I0127 13:29:08.293671 529251 main.go:141] libmachine: () Calling .GetVersion
I0127 13:29:08.293763 529251 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/embed-certs-766944/id_rsa Username:docker}
I0127 13:29:08.294180 529251 main.go:141] libmachine: Using API Version 1
I0127 13:29:08.294204 529251 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:29:08.294614 529251 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:29:08.295134 529251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:29:08.295170 529251 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:29:08.312630 529251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35019
I0127 13:29:08.313201 529251 main.go:141] libmachine: () Calling .GetVersion
I0127 13:29:08.314043 529251 main.go:141] libmachine: Using API Version 1
I0127 13:29:08.314071 529251 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:29:08.315352 529251 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:29:08.315586 529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetState
I0127 13:29:08.317764 529251 main.go:141] libmachine: (embed-certs-766944) Calling .DriverName
I0127 13:29:08.318043 529251 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0127 13:29:08.318064 529251 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0127 13:29:08.318087 529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHHostname
I0127 13:29:08.321585 529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
I0127 13:29:08.322028 529251 main.go:141] libmachine: (embed-certs-766944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:4a:1b", ip: ""} in network mk-embed-certs-766944: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:21 +0000 UTC Type:0 Mac:52:54:00:73:4a:1b Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:embed-certs-766944 Clientid:01:52:54:00:73:4a:1b}
I0127 13:29:08.322057 529251 main.go:141] libmachine: (embed-certs-766944) DBG | domain embed-certs-766944 has defined IP address 192.168.39.24 and MAC address 52:54:00:73:4a:1b in network mk-embed-certs-766944
I0127 13:29:08.322200 529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHPort
I0127 13:29:08.322476 529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHKeyPath
I0127 13:29:08.322607 529251 main.go:141] libmachine: (embed-certs-766944) Calling .GetSSHUsername
I0127 13:29:08.322797 529251 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/embed-certs-766944/id_rsa Username:docker}
I0127 13:29:08.543349 529251 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0127 13:29:08.566526 529251 node_ready.go:35] waiting up to 6m0s for node "embed-certs-766944" to be "Ready" ...
I0127 13:29:08.581029 529251 node_ready.go:49] node "embed-certs-766944" has status "Ready":"True"
I0127 13:29:08.581058 529251 node_ready.go:38] duration metric: took 14.437055ms for node "embed-certs-766944" to be "Ready" ...
I0127 13:29:08.581072 529251 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0127 13:29:08.591111 529251 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
I0127 13:29:08.663492 529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0127 13:29:08.663529 529251 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0127 13:29:08.708763 529251 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0127 13:29:08.731924 529251 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0127 13:29:08.733763 529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0127 13:29:08.733792 529251 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0127 13:29:08.816600 529251 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0127 13:29:08.816646 529251 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0127 13:29:08.862311 529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0127 13:29:08.862346 529251 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0127 13:29:08.881791 529251 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0127 13:29:08.881830 529251 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0127 13:29:08.965427 529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0127 13:29:08.965468 529251 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0127 13:29:09.025682 529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0127 13:29:09.025718 529251 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0127 13:29:09.026871 529251 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0127 13:29:09.026896 529251 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0127 13:29:09.106376 529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0127 13:29:09.106408 529251 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0127 13:29:09.173153 529251 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0127 13:29:07.101069 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:29:07.101691 531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
I0127 13:29:07.101724 531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:29:07.101645 531622 retry.go:31] will retry after 3.3503886s: waiting for domain to come up
I0127 13:29:10.454092 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:29:10.454611 531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
I0127 13:29:10.454643 531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:29:10.454550 531622 retry.go:31] will retry after 2.977667559s: waiting for domain to come up
I0127 13:29:09.316157 529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0127 13:29:09.316202 529251 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0127 13:29:09.518415 529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0127 13:29:09.518455 529251 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0127 13:29:09.836886 529251 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0127 13:29:09.836931 529251 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0127 13:29:09.974913 529251 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0127 13:29:10.529287 529251 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.820478856s)
I0127 13:29:10.529346 529251 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.797380034s)
I0127 13:29:10.529398 529251 main.go:141] libmachine: Making call to close driver server
I0127 13:29:10.529415 529251 main.go:141] libmachine: (embed-certs-766944) Calling .Close
I0127 13:29:10.529355 529251 main.go:141] libmachine: Making call to close driver server
I0127 13:29:10.529488 529251 main.go:141] libmachine: (embed-certs-766944) Calling .Close
I0127 13:29:10.529871 529251 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:29:10.529910 529251 main.go:141] libmachine: (embed-certs-766944) DBG | Closing plugin on server side
I0127 13:29:10.529932 529251 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 13:29:10.529943 529251 main.go:141] libmachine: Making call to close driver server
I0127 13:29:10.529951 529251 main.go:141] libmachine: (embed-certs-766944) Calling .Close
I0127 13:29:10.529878 529251 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:29:10.530045 529251 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 13:29:10.530070 529251 main.go:141] libmachine: Making call to close driver server
I0127 13:29:10.530088 529251 main.go:141] libmachine: (embed-certs-766944) Calling .Close
I0127 13:29:10.530265 529251 main.go:141] libmachine: (embed-certs-766944) DBG | Closing plugin on server side
I0127 13:29:10.530268 529251 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:29:10.530299 529251 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 13:29:10.530463 529251 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:29:10.530482 529251 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 13:29:10.599533 529251 main.go:141] libmachine: Making call to close driver server
I0127 13:29:10.599626 529251 main.go:141] libmachine: (embed-certs-766944) Calling .Close
I0127 13:29:10.599978 529251 main.go:141] libmachine: (embed-certs-766944) DBG | Closing plugin on server side
I0127 13:29:10.600095 529251 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:29:10.600128 529251 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 13:29:10.613397 529251 pod_ready.go:103] pod "etcd-embed-certs-766944" in "kube-system" namespace has status "Ready":"False"
I0127 13:29:11.025503 529251 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.852294623s)
I0127 13:29:11.025583 529251 main.go:141] libmachine: Making call to close driver server
I0127 13:29:11.025598 529251 main.go:141] libmachine: (embed-certs-766944) Calling .Close
I0127 13:29:11.025974 529251 main.go:141] libmachine: (embed-certs-766944) DBG | Closing plugin on server side
I0127 13:29:11.026056 529251 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:29:11.026072 529251 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 13:29:11.026081 529251 main.go:141] libmachine: Making call to close driver server
I0127 13:29:11.026094 529251 main.go:141] libmachine: (embed-certs-766944) Calling .Close
I0127 13:29:11.026369 529251 main.go:141] libmachine: (embed-certs-766944) DBG | Closing plugin on server side
I0127 13:29:11.026430 529251 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:29:11.026446 529251 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 13:29:11.026465 529251 addons.go:479] Verifying addon metrics-server=true in "embed-certs-766944"
I0127 13:29:11.846156 529251 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.871176785s)
I0127 13:29:11.846235 529251 main.go:141] libmachine: Making call to close driver server
I0127 13:29:11.846258 529251 main.go:141] libmachine: (embed-certs-766944) Calling .Close
I0127 13:29:11.846647 529251 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:29:11.846693 529251 main.go:141] libmachine: (embed-certs-766944) DBG | Closing plugin on server side
I0127 13:29:11.846706 529251 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 13:29:11.846720 529251 main.go:141] libmachine: Making call to close driver server
I0127 13:29:11.846730 529251 main.go:141] libmachine: (embed-certs-766944) Calling .Close
I0127 13:29:11.847020 529251 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:29:11.847069 529251 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 13:29:11.849004 529251 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p embed-certs-766944 addons enable metrics-server
I0127 13:29:11.850858 529251 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
I0127 13:29:08.309241 529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
I0127 13:29:10.806393 529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
I0127 13:29:12.808038 529417 pod_ready.go:103] pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace has status "Ready":"False"
I0127 13:29:11.852345 529251 addons.go:514] duration metric: took 3.628795827s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
I0127 13:29:13.097655 529251 pod_ready.go:103] pod "etcd-embed-certs-766944" in "kube-system" namespace has status "Ready":"False"
I0127 13:29:13.433798 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:29:13.434282 531586 main.go:141] libmachine: (newest-cni-296225) DBG | unable to find current IP address of domain newest-cni-296225 in network mk-newest-cni-296225
I0127 13:29:13.434324 531586 main.go:141] libmachine: (newest-cni-296225) DBG | I0127 13:29:13.434271 531622 retry.go:31] will retry after 5.418420331s: waiting for domain to come up
I0127 13:29:14.300254 529417 pod_ready.go:82] duration metric: took 4m0.000130065s for pod "metrics-server-f79f97bbb-l56jp" in "kube-system" namespace to be "Ready" ...
E0127 13:29:14.300291 529417 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I0127 13:29:14.300324 529417 pod_ready.go:39] duration metric: took 4m12.210910321s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0127 13:29:14.300355 529417 kubeadm.go:597] duration metric: took 4m20.336267253s to restartPrimaryControlPlane
W0127 13:29:14.300420 529417 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
I0127 13:29:14.300449 529417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I0127 13:29:16.335301 529417 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.034816955s)
I0127 13:29:16.335395 529417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0127 13:29:16.352998 529417 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0127 13:29:16.365092 529417 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0127 13:29:16.378733 529417 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0127 13:29:16.378758 529417 kubeadm.go:157] found existing configuration files:
I0127 13:29:16.378804 529417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
I0127 13:29:16.395924 529417 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0127 13:29:16.396005 529417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0127 13:29:16.408496 529417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
I0127 13:29:16.418917 529417 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0127 13:29:16.418986 529417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0127 13:29:16.429065 529417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
I0127 13:29:16.439234 529417 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0127 13:29:16.439333 529417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0127 13:29:16.449865 529417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
I0127 13:29:16.460738 529417 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0127 13:29:16.460831 529417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0127 13:29:16.472411 529417 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0127 13:29:16.642625 529417 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0127 13:29:15.100860 529251 pod_ready.go:103] pod "etcd-embed-certs-766944" in "kube-system" namespace has status "Ready":"False"
I0127 13:29:16.102026 529251 pod_ready.go:93] pod "etcd-embed-certs-766944" in "kube-system" namespace has status "Ready":"True"
I0127 13:29:16.102064 529251 pod_ready.go:82] duration metric: took 7.510920671s for pod "etcd-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
I0127 13:29:16.102080 529251 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
I0127 13:29:16.108782 529251 pod_ready.go:93] pod "kube-apiserver-embed-certs-766944" in "kube-system" namespace has status "Ready":"True"
I0127 13:29:16.108818 529251 pod_ready.go:82] duration metric: took 6.727536ms for pod "kube-apiserver-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
I0127 13:29:16.108832 529251 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
I0127 13:29:16.117964 529251 pod_ready.go:93] pod "kube-controller-manager-embed-certs-766944" in "kube-system" namespace has status "Ready":"True"
I0127 13:29:16.117994 529251 pod_ready.go:82] duration metric: took 9.151947ms for pod "kube-controller-manager-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
I0127 13:29:16.118008 529251 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vp88s" in "kube-system" namespace to be "Ready" ...
I0127 13:29:16.125633 529251 pod_ready.go:93] pod "kube-proxy-vp88s" in "kube-system" namespace has status "Ready":"True"
I0127 13:29:16.125657 529251 pod_ready.go:82] duration metric: took 7.641622ms for pod "kube-proxy-vp88s" in "kube-system" namespace to be "Ready" ...
I0127 13:29:16.125667 529251 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
I0127 13:29:16.141368 529251 pod_ready.go:93] pod "kube-scheduler-embed-certs-766944" in "kube-system" namespace has status "Ready":"True"
I0127 13:29:16.141395 529251 pod_ready.go:82] duration metric: took 15.721182ms for pod "kube-scheduler-embed-certs-766944" in "kube-system" namespace to be "Ready" ...
I0127 13:29:16.141403 529251 pod_ready.go:39] duration metric: took 7.560318089s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0127 13:29:16.141421 529251 api_server.go:52] waiting for apiserver process to appear ...
I0127 13:29:16.141484 529251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 13:29:16.168318 529251 api_server.go:72] duration metric: took 7.944806249s to wait for apiserver process to appear ...
I0127 13:29:16.168353 529251 api_server.go:88] waiting for apiserver healthz status ...
I0127 13:29:16.168382 529251 api_server.go:253] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
I0127 13:29:16.178242 529251 api_server.go:279] https://192.168.39.24:8443/healthz returned 200:
ok
I0127 13:29:16.179663 529251 api_server.go:141] control plane version: v1.32.1
I0127 13:29:16.179696 529251 api_server.go:131] duration metric: took 11.33324ms to wait for apiserver health ...
I0127 13:29:16.179706 529251 system_pods.go:43] waiting for kube-system pods to appear ...
I0127 13:29:16.299895 529251 system_pods.go:59] 9 kube-system pods found
I0127 13:29:16.299927 529251 system_pods.go:61] "coredns-668d6bf9bc-9h4k2" [0eb84d56-e399-4808-afda-b0e1ec4f201f] Running
I0127 13:29:16.299933 529251 system_pods.go:61] "coredns-668d6bf9bc-wf444" [7afc402e-ab81-4eb5-b2cf-08be738f171d] Running
I0127 13:29:16.299937 529251 system_pods.go:61] "etcd-embed-certs-766944" [22be64ef-9ba9-4750-aca9-f34b01b46f16] Running
I0127 13:29:16.299941 529251 system_pods.go:61] "kube-apiserver-embed-certs-766944" [397082cc-acad-493c-8ddd-9f49def9100a] Running
I0127 13:29:16.299945 529251 system_pods.go:61] "kube-controller-manager-embed-certs-766944" [fe84cf8b-7074-485b-a16e-d75b52b9fe15] Running
I0127 13:29:16.299948 529251 system_pods.go:61] "kube-proxy-vp88s" [18e5bf87-73fb-43c4-a73e-b2f21a1bb7b8] Running
I0127 13:29:16.299951 529251 system_pods.go:61] "kube-scheduler-embed-certs-766944" [96587dc6-6fbd-4d22-acfa-09a89f1e711a] Running
I0127 13:29:16.299956 529251 system_pods.go:61] "metrics-server-f79f97bbb-27dz9" [9f604bd3-a953-4a12-b1bc-48e4e4c8bb4d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0127 13:29:16.299962 529251 system_pods.go:61] "storage-provisioner" [7d91f3a9-4b10-40fa-84bc-9d881d955319] Running
I0127 13:29:16.299973 529251 system_pods.go:74] duration metric: took 120.259661ms to wait for pod list to return data ...
I0127 13:29:16.299984 529251 default_sa.go:34] waiting for default service account to be created ...
I0127 13:29:16.496603 529251 default_sa.go:45] found service account: "default"
I0127 13:29:16.496645 529251 default_sa.go:55] duration metric: took 196.6512ms for default service account to be created ...
I0127 13:29:16.496658 529251 system_pods.go:137] waiting for k8s-apps to be running ...
I0127 13:29:16.702376 529251 system_pods.go:87] 9 kube-system pods found
I0127 13:29:18.854257 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:29:18.854914 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has current primary IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:29:18.854944 531586 main.go:141] libmachine: (newest-cni-296225) found domain IP: 192.168.72.46
I0127 13:29:18.854956 531586 main.go:141] libmachine: (newest-cni-296225) reserving static IP address...
I0127 13:29:18.855436 531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "newest-cni-296225", mac: "52:54:00:25:60:c9", ip: "192.168.72.46"} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
I0127 13:29:18.855466 531586 main.go:141] libmachine: (newest-cni-296225) DBG | skip adding static IP to network mk-newest-cni-296225 - found existing host DHCP lease matching {name: "newest-cni-296225", mac: "52:54:00:25:60:c9", ip: "192.168.72.46"}
I0127 13:29:18.855480 531586 main.go:141] libmachine: (newest-cni-296225) reserved static IP address 192.168.72.46 for domain newest-cni-296225
I0127 13:29:18.855493 531586 main.go:141] libmachine: (newest-cni-296225) waiting for SSH...
I0127 13:29:18.855509 531586 main.go:141] libmachine: (newest-cni-296225) DBG | Getting to WaitForSSH function...
I0127 13:29:18.858091 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:29:18.858477 531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
I0127 13:29:18.858507 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:29:18.858705 531586 main.go:141] libmachine: (newest-cni-296225) DBG | Using SSH client type: external
I0127 13:29:18.858725 531586 main.go:141] libmachine: (newest-cni-296225) DBG | Using SSH private key: /home/jenkins/minikube-integration/20317-466901/.minikube/machines/newest-cni-296225/id_rsa (-rw-------)
I0127 13:29:18.858760 531586 main.go:141] libmachine: (newest-cni-296225) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.46 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20317-466901/.minikube/machines/newest-cni-296225/id_rsa -p 22] /usr/bin/ssh <nil>}
I0127 13:29:18.858784 531586 main.go:141] libmachine: (newest-cni-296225) DBG | About to run SSH command:
I0127 13:29:18.858806 531586 main.go:141] libmachine: (newest-cni-296225) DBG | exit 0
I0127 13:29:18.996896 531586 main.go:141] libmachine: (newest-cni-296225) DBG | SSH cmd err, output: <nil>:
I0127 13:29:18.997263 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetConfigRaw
I0127 13:29:18.998035 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetIP
I0127 13:29:19.001537 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:29:19.001980 531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
I0127 13:29:19.002005 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:29:19.002524 531586 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/newest-cni-296225/config.json ...
I0127 13:29:19.002778 531586 machine.go:93] provisionDockerMachine start ...
I0127 13:29:19.002804 531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
I0127 13:29:19.003111 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
I0127 13:29:19.006300 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:29:19.006759 531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
I0127 13:29:19.006788 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:29:19.006991 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
I0127 13:29:19.007221 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
I0127 13:29:19.007434 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
I0127 13:29:19.007600 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
I0127 13:29:19.007802 531586 main.go:141] libmachine: Using SSH client type: native
I0127 13:29:19.008050 531586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.72.46 22 <nil> <nil>}
I0127 13:29:19.008068 531586 main.go:141] libmachine: About to run SSH command:
hostname
I0127 13:29:19.124549 531586 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0127 13:29:19.124589 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetMachineName
I0127 13:29:19.124921 531586 buildroot.go:166] provisioning hostname "newest-cni-296225"
I0127 13:29:19.124953 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetMachineName
I0127 13:29:19.125168 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
I0127 13:29:19.128509 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:29:19.128870 531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
I0127 13:29:19.128904 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:29:19.129136 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
I0127 13:29:19.129338 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
I0127 13:29:19.129489 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
I0127 13:29:19.129682 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
I0127 13:29:19.129915 531586 main.go:141] libmachine: Using SSH client type: native
I0127 13:29:19.130181 531586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.72.46 22 <nil> <nil>}
I0127 13:29:19.130202 531586 main.go:141] libmachine: About to run SSH command:
sudo hostname newest-cni-296225 && echo "newest-cni-296225" | sudo tee /etc/hostname
I0127 13:29:19.274181 531586 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-296225
I0127 13:29:19.274233 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
I0127 13:29:19.277975 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:29:19.278540 531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
I0127 13:29:19.278575 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:29:19.278963 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
I0127 13:29:19.279243 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
I0127 13:29:19.279514 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
I0127 13:29:19.279686 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
I0127 13:29:19.279898 531586 main.go:141] libmachine: Using SSH client type: native
I0127 13:29:19.280149 531586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.72.46 22 <nil> <nil>}
I0127 13:29:19.280176 531586 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\snewest-cni-296225' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-296225/g' /etc/hosts;
else
echo '127.0.1.1 newest-cni-296225' | sudo tee -a /etc/hosts;
fi
fi
I0127 13:29:19.425977 531586 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0127 13:29:19.426016 531586 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20317-466901/.minikube CaCertPath:/home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20317-466901/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20317-466901/.minikube}
I0127 13:29:19.426066 531586 buildroot.go:174] setting up certificates
I0127 13:29:19.426080 531586 provision.go:84] configureAuth start
I0127 13:29:19.426092 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetMachineName
I0127 13:29:19.426372 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetIP
I0127 13:29:19.429756 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:29:19.430201 531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
I0127 13:29:19.430230 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:29:19.430467 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
I0127 13:29:19.432982 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:29:19.433352 531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
I0127 13:29:19.433381 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:29:19.433508 531586 provision.go:143] copyHostCerts
I0127 13:29:19.433596 531586 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-466901/.minikube/ca.pem, removing ...
I0127 13:29:19.433613 531586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-466901/.minikube/ca.pem
I0127 13:29:19.433713 531586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20317-466901/.minikube/ca.pem (1082 bytes)
I0127 13:29:19.433862 531586 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-466901/.minikube/cert.pem, removing ...
I0127 13:29:19.433898 531586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-466901/.minikube/cert.pem
I0127 13:29:19.433952 531586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20317-466901/.minikube/cert.pem (1123 bytes)
I0127 13:29:19.434069 531586 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-466901/.minikube/key.pem, removing ...
I0127 13:29:19.434083 531586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-466901/.minikube/key.pem
I0127 13:29:19.434121 531586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20317-466901/.minikube/key.pem (1675 bytes)
I0127 13:29:19.434225 531586 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20317-466901/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca-key.pem org=jenkins.newest-cni-296225 san=[127.0.0.1 192.168.72.46 localhost minikube newest-cni-296225]
I0127 13:29:19.616134 531586 provision.go:177] copyRemoteCerts
I0127 13:29:19.616230 531586 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0127 13:29:19.616268 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
I0127 13:29:19.619632 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:29:19.620115 531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
I0127 13:29:19.620170 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:29:19.620627 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
I0127 13:29:19.620882 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
I0127 13:29:19.621062 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
I0127 13:29:19.621267 531586 sshutil.go:53] new ssh client: &{IP:192.168.72.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/newest-cni-296225/id_rsa Username:docker}
I0127 13:29:19.716453 531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0127 13:29:19.751558 531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I0127 13:29:19.787164 531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0127 13:29:19.822729 531586 provision.go:87] duration metric: took 396.632166ms to configureAuth
I0127 13:29:19.822766 531586 buildroot.go:189] setting minikube options for container-runtime
I0127 13:29:19.823021 531586 config.go:182] Loaded profile config "newest-cni-296225": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 13:29:19.823035 531586 machine.go:96] duration metric: took 820.241874ms to provisionDockerMachine
I0127 13:29:19.823044 531586 start.go:293] postStartSetup for "newest-cni-296225" (driver="kvm2")
I0127 13:29:19.823074 531586 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0127 13:29:19.823125 531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
I0127 13:29:19.823524 531586 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0127 13:29:19.823610 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
I0127 13:29:19.826416 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:29:19.826837 531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
I0127 13:29:19.826869 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:29:19.827189 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
I0127 13:29:19.827424 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
I0127 13:29:19.827641 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
I0127 13:29:19.827800 531586 sshutil.go:53] new ssh client: &{IP:192.168.72.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/newest-cni-296225/id_rsa Username:docker}
I0127 13:29:19.922618 531586 ssh_runner.go:195] Run: cat /etc/os-release
I0127 13:29:19.927700 531586 info.go:137] Remote host: Buildroot 2023.02.9
I0127 13:29:19.927740 531586 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-466901/.minikube/addons for local assets ...
I0127 13:29:19.927820 531586 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-466901/.minikube/files for local assets ...
I0127 13:29:19.927920 531586 filesync.go:149] local asset: /home/jenkins/minikube-integration/20317-466901/.minikube/files/etc/ssl/certs/4742752.pem -> 4742752.pem in /etc/ssl/certs
I0127 13:29:19.928047 531586 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0127 13:29:19.940393 531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/files/etc/ssl/certs/4742752.pem --> /etc/ssl/certs/4742752.pem (1708 bytes)
I0127 13:29:19.970138 531586 start.go:296] duration metric: took 147.059526ms for postStartSetup
I0127 13:29:19.970186 531586 fix.go:56] duration metric: took 23.687879815s for fixHost
I0127 13:29:19.970213 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
I0127 13:29:19.973696 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:29:19.974136 531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
I0127 13:29:19.974162 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:29:19.974433 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
I0127 13:29:19.974671 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
I0127 13:29:19.974863 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
I0127 13:29:19.975000 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
I0127 13:29:19.975177 531586 main.go:141] libmachine: Using SSH client type: native
I0127 13:29:19.975406 531586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.72.46 22 <nil> <nil>}
I0127 13:29:19.975421 531586 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0127 13:29:20.097158 531586 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737984560.051374432
I0127 13:29:20.097195 531586 fix.go:216] guest clock: 1737984560.051374432
I0127 13:29:20.097205 531586 fix.go:229] Guest: 2025-01-27 13:29:20.051374432 +0000 UTC Remote: 2025-01-27 13:29:19.970191951 +0000 UTC m=+23.842107580 (delta=81.182481ms)
I0127 13:29:20.097251 531586 fix.go:200] guest clock delta is within tolerance: 81.182481ms
I0127 13:29:20.097264 531586 start.go:83] releasing machines lock for "newest-cni-296225", held for 23.814976228s
I0127 13:29:20.097302 531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
I0127 13:29:20.097604 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetIP
I0127 13:29:20.101191 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:29:20.101642 531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
I0127 13:29:20.101693 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:29:20.101991 531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
I0127 13:29:20.102587 531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
I0127 13:29:20.102797 531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
I0127 13:29:20.102930 531586 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0127 13:29:20.102980 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
I0127 13:29:20.103025 531586 ssh_runner.go:195] Run: cat /version.json
I0127 13:29:20.103054 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
I0127 13:29:20.106331 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:29:20.106785 531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
I0127 13:29:20.106843 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:29:20.106883 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:29:20.107100 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
I0127 13:29:20.107355 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
I0127 13:29:20.107415 531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
I0127 13:29:20.107456 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:29:20.107545 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
I0127 13:29:20.107711 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
I0127 13:29:20.107752 531586 sshutil.go:53] new ssh client: &{IP:192.168.72.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/newest-cni-296225/id_rsa Username:docker}
I0127 13:29:20.107851 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
I0127 13:29:20.108004 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
I0127 13:29:20.108175 531586 sshutil.go:53] new ssh client: &{IP:192.168.72.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/newest-cni-296225/id_rsa Username:docker}
I0127 13:29:20.198167 531586 ssh_runner.go:195] Run: systemctl --version
I0127 13:29:20.220547 531586 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0127 13:29:20.228913 531586 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0127 13:29:20.229009 531586 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0127 13:29:20.252220 531586 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
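ssh_runner prints the argv without shell quoting, so the find invocation above loses its escapes; with them restored, the step that shelved the podman bridge config looks roughly like this (a sketch of the equivalent command, not a verbatim replay):

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" \;
    # renames e.g. 87-podman-bridge.conflist -> 87-podman-bridge.conflist.mk_disabled,
    # so it no longer matches CNI config globs but stays restorable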
I0127 13:29:20.252252 531586 start.go:495] detecting cgroup driver to use...
I0127 13:29:20.252336 531586 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0127 13:29:20.290040 531586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0127 13:29:20.307723 531586 docker.go:217] disabling cri-docker service (if available) ...
I0127 13:29:20.307812 531586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0127 13:29:20.323473 531586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0127 13:29:20.339833 531586 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0127 13:29:20.476188 531586 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0127 13:29:20.632180 531586 docker.go:233] disabling docker service ...
I0127 13:29:20.632272 531586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0127 13:29:20.647480 531586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0127 13:29:20.662456 531586 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0127 13:29:20.849643 531586 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0127 13:29:21.014719 531586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0127 13:29:21.034260 531586 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0127 13:29:21.055949 531586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0127 13:29:21.068764 531586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0127 13:29:21.083524 531586 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0127 13:29:21.083605 531586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0127 13:29:21.098914 531586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0127 13:29:21.113664 531586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0127 13:29:21.127826 531586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0127 13:29:21.139382 531586 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0127 13:29:21.151342 531586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0127 13:29:21.162384 531586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0127 13:29:21.174714 531586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
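The sed series rewrites /etc/containerd/config.toml in place: pin the pause image, allow OOM-score adjustment, select the cgroupfs driver (SystemdCgroup = false), migrate runtime references to io.containerd.runc.v2, reset conf_dir, and re-enable unprivileged ports. The edited values can be confirmed with a grep (sketch; assumes the default config.toml layout):

    grep -nE 'sandbox_image|SystemdCgroup|io\.containerd\.runc|conf_dir|enable_unprivileged_ports' \
      /etc/containerd/config.toml
    # expected, among others:
    #   sandbox_image = "registry.k8s.io/pause:3.10"
    #   SystemdCgroup = false
    #   conf_dir = "/etc/cni/net.d"
    #   enable_unprivileged_ports = true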
I0127 13:29:21.188361 531586 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0127 13:29:21.201837 531586 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0127 13:29:21.201921 531586 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0127 13:29:21.216404 531586 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
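The status-255 sysctl is expected: the net.bridge.* keys only exist once br_netfilter is loaded, which is exactly what the modprobe fixes. Verifying by hand (sketch):

    lsmod | grep br_netfilter                   # module present after modprobe
    sysctl net.bridge.bridge-nf-call-iptables   # key now resolves
    cat /proc/sys/net/ipv4/ip_forward           # 1 after the echo above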
I0127 13:29:21.226169 531586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 13:29:21.347858 531586 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0127 13:29:21.387449 531586 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0127 13:29:21.387582 531586 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0127 13:29:21.393515 531586 retry.go:31] will retry after 514.05687ms: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
I0127 13:29:21.908225 531586 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
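The first stat races the containerd restart and fails; minikube retries until the socket appears, within the 60s budget. The same wait as a shell loop (sketch):

    timeout 60 sh -c \
      'until stat /run/containerd/containerd.sock >/dev/null 2>&1; do sleep 0.5; done'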
I0127 13:29:21.917708 531586 start.go:563] Will wait 60s for crictl version
I0127 13:29:21.917786 531586 ssh_runner.go:195] Run: which crictl
I0127 13:29:21.923989 531586 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0127 13:29:21.981569 531586 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.7.23
RuntimeApiVersion: v1
I0127 13:29:21.981675 531586 ssh_runner.go:195] Run: containerd --version
I0127 13:29:22.027649 531586 ssh_runner.go:195] Run: containerd --version
I0127 13:29:22.060339 531586 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
I0127 13:29:22.061787 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetIP
I0127 13:29:22.065481 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:29:22.065908 531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
I0127 13:29:22.065946 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:29:22.066183 531586 ssh_runner.go:195] Run: grep 192.168.72.1 host.minikube.internal$ /etc/hosts
I0127 13:29:22.070907 531586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
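That one-liner is an idempotent /etc/hosts update: drop any stale host.minikube.internal line, append the current mapping, and copy the temp file back under sudo. Parameterized sketch of the same idiom:

    NAME=host.minikube.internal; IP=192.168.72.1
    { grep -v $'\t'"$NAME"'$' /etc/hosts; echo "$IP  $NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts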
I0127 13:29:22.089788 531586 out.go:177] - kubeadm.pod-network-cidr=10.42.0.0/16
I0127 13:29:25.581414 529417 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
I0127 13:29:25.581498 529417 kubeadm.go:310] [preflight] Running pre-flight checks
I0127 13:29:25.581603 529417 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0127 13:29:25.581744 529417 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0127 13:29:25.581857 529417 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0127 13:29:25.581911 529417 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0127 13:29:25.583668 529417 out.go:235] - Generating certificates and keys ...
I0127 13:29:25.583784 529417 kubeadm.go:310] [certs] Using existing ca certificate authority
I0127 13:29:25.583864 529417 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0127 13:29:25.583999 529417 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0127 13:29:25.584094 529417 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
I0127 13:29:25.584212 529417 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
I0127 13:29:25.584290 529417 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
I0127 13:29:25.584368 529417 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
I0127 13:29:25.584490 529417 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
I0127 13:29:25.584607 529417 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0127 13:29:25.584736 529417 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0127 13:29:25.584797 529417 kubeadm.go:310] [certs] Using the existing "sa" key
I0127 13:29:25.584859 529417 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0127 13:29:25.584911 529417 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0127 13:29:25.584981 529417 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0127 13:29:25.585070 529417 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0127 13:29:25.585182 529417 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0127 13:29:25.585291 529417 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0127 13:29:25.585425 529417 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0127 13:29:25.585505 529417 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0127 13:29:25.587922 529417 out.go:235] - Booting up control plane ...
I0127 13:29:25.588008 529417 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0127 13:29:25.588109 529417 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0127 13:29:25.588212 529417 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0127 13:29:25.588306 529417 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0127 13:29:25.588407 529417 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0127 13:29:25.588476 529417 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0127 13:29:25.588653 529417 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0127 13:29:25.588744 529417 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0127 13:29:25.588806 529417 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.424535ms
I0127 13:29:25.588894 529417 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0127 13:29:25.588947 529417 kubeadm.go:310] [api-check] The API server is healthy after 6.003546574s
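Those two gates poll plain HTTP(S) health endpoints: the kubelet's healthz on localhost:10248 and the API server's /healthz. Probed by hand on the node (sketch; -k because the API serves its own CA, and /healthz may require credentials if anonymous auth is disabled):

    curl -sf http://127.0.0.1:10248/healthz && echo "kubelet ok"
    curl -skf https://127.0.0.1:8444/healthz && echo "apiserver ok"   # 8444 is this profile's apiserver port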
I0127 13:29:25.589042 529417 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0127 13:29:25.589188 529417 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0127 13:29:25.589243 529417 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0127 13:29:25.589423 529417 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-325510 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0127 13:29:25.589477 529417 kubeadm.go:310] [bootstrap-token] Using token: pmveah.4ebz9u5xjcadsa8l
I0127 13:29:25.590661 529417 out.go:235] - Configuring RBAC rules ...
I0127 13:29:25.590772 529417 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0127 13:29:25.590884 529417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0127 13:29:25.591076 529417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0127 13:29:25.591309 529417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0127 13:29:25.591477 529417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0127 13:29:25.591601 529417 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0127 13:29:25.591734 529417 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0127 13:29:25.591810 529417 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0127 13:29:25.591869 529417 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0127 13:29:25.591879 529417 kubeadm.go:310]
I0127 13:29:25.591954 529417 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0127 13:29:25.591974 529417 kubeadm.go:310]
I0127 13:29:25.592097 529417 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0127 13:29:25.592115 529417 kubeadm.go:310]
I0127 13:29:25.592151 529417 kubeadm.go:310] mkdir -p $HOME/.kube
I0127 13:29:25.592237 529417 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0127 13:29:25.592327 529417 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0127 13:29:25.592337 529417 kubeadm.go:310]
I0127 13:29:25.592390 529417 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0127 13:29:25.592397 529417 kubeadm.go:310]
I0127 13:29:25.592435 529417 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0127 13:29:25.592439 529417 kubeadm.go:310]
I0127 13:29:25.592512 529417 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0127 13:29:25.592614 529417 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0127 13:29:25.592674 529417 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0127 13:29:25.592682 529417 kubeadm.go:310]
I0127 13:29:25.592801 529417 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0127 13:29:25.592928 529417 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0127 13:29:25.592941 529417 kubeadm.go:310]
I0127 13:29:25.593032 529417 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token pmveah.4ebz9u5xjcadsa8l \
I0127 13:29:25.593158 529417 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:44e7ea386d1f8e7ab1336d835156dd84ecca20069390afc88f04bb1a3c629fd2 \
I0127 13:29:25.593193 529417 kubeadm.go:310] --control-plane
I0127 13:29:25.593206 529417 kubeadm.go:310]
I0127 13:29:25.593328 529417 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0127 13:29:25.593347 529417 kubeadm.go:310]
I0127 13:29:25.593453 529417 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token pmveah.4ebz9u5xjcadsa8l \
I0127 13:29:25.593643 529417 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:44e7ea386d1f8e7ab1336d835156dd84ecca20069390afc88f04bb1a3c629fd2
I0127 13:29:25.593663 529417 cni.go:84] Creating CNI manager for ""
I0127 13:29:25.593674 529417 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0127 13:29:25.595331 529417 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0127 13:29:22.091203 531586 kubeadm.go:883] updating cluster {Name:newest-cni-296225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-296225 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.46 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0127 13:29:22.091437 531586 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
I0127 13:29:22.091524 531586 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 13:29:22.133513 531586 containerd.go:627] all images are preloaded for containerd runtime.
I0127 13:29:22.133543 531586 containerd.go:534] Images already preloaded, skipping extraction
I0127 13:29:22.133614 531586 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 13:29:22.172620 531586 containerd.go:627] all images are preloaded for containerd runtime.
I0127 13:29:22.172654 531586 cache_images.go:84] Images are preloaded, skipping loading
I0127 13:29:22.172666 531586 kubeadm.go:934] updating node { 192.168.72.46 8443 v1.32.1 containerd true true} ...
I0127 13:29:22.172814 531586 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-296225 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.46
[Install]
config:
{KubernetesVersion:v1.32.1 ClusterName:newest-cni-296225 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0127 13:29:22.172904 531586 ssh_runner.go:195] Run: sudo crictl info
I0127 13:29:22.221421 531586 cni.go:84] Creating CNI manager for ""
I0127 13:29:22.221446 531586 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0127 13:29:22.221457 531586 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
I0127 13:29:22.221483 531586 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.46 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-296225 NodeName:newest-cni-296225 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.46"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.46 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0127 13:29:22.221619 531586 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.72.46
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "newest-cni-296225"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.72.46"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.72.46"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      - name: "proxy-refresh-interval"
        value: "70000"
kubernetesVersion: v1.32.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.42.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.42.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
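The file is a four-document stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) written to /var/tmp/minikube/kubeadm.yaml. One way to sanity-check it without mutating the node, using the same pinned binaries (a sketch, not a step minikube itself runs):

    sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run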
I0127 13:29:22.221696 531586 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
I0127 13:29:22.233206 531586 binaries.go:44] Found k8s binaries, skipping transfer
I0127 13:29:22.233298 531586 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0127 13:29:22.247498 531586 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
I0127 13:29:22.265563 531586 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0127 13:29:22.283377 531586 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
I0127 13:29:22.304627 531586 ssh_runner.go:195] Run: grep 192.168.72.46 control-plane.minikube.internal$ /etc/hosts
I0127 13:29:22.310093 531586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.46 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0127 13:29:22.328149 531586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 13:29:22.474894 531586 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0127 13:29:22.498792 531586 certs.go:68] Setting up /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/newest-cni-296225 for IP: 192.168.72.46
I0127 13:29:22.498819 531586 certs.go:194] generating shared ca certs ...
I0127 13:29:22.498848 531586 certs.go:226] acquiring lock for ca certs: {Name:mk60f2aac78eb363c5e06a00675357d94c0df88d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 13:29:22.499080 531586 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20317-466901/.minikube/ca.key
I0127 13:29:22.499144 531586 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20317-466901/.minikube/proxy-client-ca.key
I0127 13:29:22.499160 531586 certs.go:256] generating profile certs ...
I0127 13:29:22.499295 531586 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/newest-cni-296225/client.key
I0127 13:29:22.499368 531586 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/newest-cni-296225/apiserver.key.1b824597
I0127 13:29:22.499428 531586 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/newest-cni-296225/proxy-client.key
I0127 13:29:22.499576 531586 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/474275.pem (1338 bytes)
W0127 13:29:22.499617 531586 certs.go:480] ignoring /home/jenkins/minikube-integration/20317-466901/.minikube/certs/474275_empty.pem, impossibly tiny 0 bytes
I0127 13:29:22.499632 531586 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca-key.pem (1679 bytes)
I0127 13:29:22.499663 531586 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/ca.pem (1082 bytes)
I0127 13:29:22.499700 531586 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/cert.pem (1123 bytes)
I0127 13:29:22.499734 531586 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/certs/key.pem (1675 bytes)
I0127 13:29:22.499790 531586 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-466901/.minikube/files/etc/ssl/certs/4742752.pem (1708 bytes)
I0127 13:29:22.500650 531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0127 13:29:22.551481 531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0127 13:29:22.590593 531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0127 13:29:22.630918 531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0127 13:29:22.660478 531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/newest-cni-296225/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I0127 13:29:22.696686 531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/newest-cni-296225/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0127 13:29:22.724193 531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/newest-cni-296225/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0127 13:29:22.752949 531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/profiles/newest-cni-296225/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0127 13:29:22.784814 531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0127 13:29:22.812321 531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/certs/474275.pem --> /usr/share/ca-certificates/474275.pem (1338 bytes)
I0127 13:29:22.842249 531586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-466901/.minikube/files/etc/ssl/certs/4742752.pem --> /usr/share/ca-certificates/4742752.pem (1708 bytes)
I0127 13:29:22.872391 531586 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0127 13:29:22.898310 531586 ssh_runner.go:195] Run: openssl version
I0127 13:29:22.905518 531586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0127 13:29:22.917623 531586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0127 13:29:22.922904 531586 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 12:10 /usr/share/ca-certificates/minikubeCA.pem
I0127 13:29:22.922982 531586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0127 13:29:22.929666 531586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0127 13:29:22.941982 531586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/474275.pem && ln -fs /usr/share/ca-certificates/474275.pem /etc/ssl/certs/474275.pem"
I0127 13:29:22.955315 531586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/474275.pem
I0127 13:29:22.962079 531586 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 12:18 /usr/share/ca-certificates/474275.pem
I0127 13:29:22.962157 531586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/474275.pem
I0127 13:29:22.971599 531586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/474275.pem /etc/ssl/certs/51391683.0"
I0127 13:29:22.985012 531586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4742752.pem && ln -fs /usr/share/ca-certificates/4742752.pem /etc/ssl/certs/4742752.pem"
I0127 13:29:22.998788 531586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4742752.pem
I0127 13:29:23.005232 531586 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 12:18 /usr/share/ca-certificates/4742752.pem
I0127 13:29:23.005312 531586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4742752.pem
I0127 13:29:23.013471 531586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4742752.pem /etc/ssl/certs/3ec20f2e.0"
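Each CA gets a second name in /etc/ssl/certs: a symlink named after its OpenSSL subject hash plus ".0", which is how OpenSSL's hashed directory lookup finds issuers. The idiom in isolation (sketch):

    pem=/etc/ssl/certs/minikubeCA.pem
    h=$(openssl x509 -hash -noout -in "$pem")   # b5213941 for this CA
    sudo ln -fs "$pem" "/etc/ssl/certs/$h.0"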
I0127 13:29:23.028126 531586 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0127 13:29:23.033971 531586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0127 13:29:23.041089 531586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0127 13:29:23.048533 531586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0127 13:29:23.056641 531586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0127 13:29:23.065453 531586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0127 13:29:23.074452 531586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
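`-checkend 86400` exits nonzero when a certificate expires within the next 24 hours, which is what decides whether these control-plane certs get regenerated. The same sweep as a loop (sketch):

    for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
             etcd/healthcheck-client etcd/peer front-proxy-client; do
      openssl x509 -noout -in "/var/lib/minikube/certs/$c.crt" -checkend 86400 \
        || echo "$c expires within 24h"
    done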
I0127 13:29:23.083360 531586 kubeadm.go:392] StartCluster: {Name:newest-cni-296225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-296225 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.46 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0127 13:29:23.083511 531586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0127 13:29:23.083604 531586 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0127 13:29:23.138902 531586 cri.go:89] found id: "d766c6e246b1ecdf40d93c7468225dc4fad90ed651b6fe2c936d5fcc3d3d5170"
I0127 13:29:23.138937 531586 cri.go:89] found id: "b555d58cf5206b0a6fd83f0168cfb804792c0d83433705e5ac60118320fece46"
I0127 13:29:23.138941 531586 cri.go:89] found id: "ab62389a943f48c8078929c4529e496d71a5839cc3224de20672cab59cf3d31f"
I0127 13:29:23.138945 531586 cri.go:89] found id: "e035113c19405073ac2218fc9137ccfb808c5e6b9a0a15344c76d9b3e648cf67"
I0127 13:29:23.138947 531586 cri.go:89] found id: "7005555f3a67b3371c40da7f69569f4070f3d54977562479fd46b12e40341ee1"
I0127 13:29:23.138952 531586 cri.go:89] found id: "5ff6602b0f8a4cd5e4cb51ca77e920e00b1c9e20d02131be56addb081e9027cc"
I0127 13:29:23.138955 531586 cri.go:89] found id: "2614ff10025bce8287c075ab3139b2d06632fdb7cd672a7be31fbff64ffdea9b"
I0127 13:29:23.138958 531586 cri.go:89] found id: ""
I0127 13:29:23.139005 531586 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
W0127 13:29:23.161523 531586 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-01-27T13:29:23Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I0127 13:29:23.161644 531586 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0127 13:29:23.177352 531586 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0127 13:29:23.177377 531586 kubeadm.go:593] restartPrimaryControlPlane start ...
I0127 13:29:23.177436 531586 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0127 13:29:23.190684 531586 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0127 13:29:23.191837 531586 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-296225" does not appear in /home/jenkins/minikube-integration/20317-466901/kubeconfig
I0127 13:29:23.192568 531586 kubeconfig.go:62] /home/jenkins/minikube-integration/20317-466901/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-296225" cluster setting kubeconfig missing "newest-cni-296225" context setting]
I0127 13:29:23.193462 531586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-466901/kubeconfig: {Name:mkc116eec378af43ea8fefe45e11af3e19be85bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 13:29:23.195884 531586 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0127 13:29:23.210992 531586 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.46
I0127 13:29:23.211040 531586 kubeadm.go:1160] stopping kube-system containers ...
I0127 13:29:23.211058 531586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
I0127 13:29:23.211141 531586 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0127 13:29:23.266429 531586 cri.go:89] found id: "d766c6e246b1ecdf40d93c7468225dc4fad90ed651b6fe2c936d5fcc3d3d5170"
I0127 13:29:23.266458 531586 cri.go:89] found id: "b555d58cf5206b0a6fd83f0168cfb804792c0d83433705e5ac60118320fece46"
I0127 13:29:23.266464 531586 cri.go:89] found id: "ab62389a943f48c8078929c4529e496d71a5839cc3224de20672cab59cf3d31f"
I0127 13:29:23.266468 531586 cri.go:89] found id: "e035113c19405073ac2218fc9137ccfb808c5e6b9a0a15344c76d9b3e648cf67"
I0127 13:29:23.266472 531586 cri.go:89] found id: "7005555f3a67b3371c40da7f69569f4070f3d54977562479fd46b12e40341ee1"
I0127 13:29:23.266477 531586 cri.go:89] found id: "5ff6602b0f8a4cd5e4cb51ca77e920e00b1c9e20d02131be56addb081e9027cc"
I0127 13:29:23.266481 531586 cri.go:89] found id: "2614ff10025bce8287c075ab3139b2d06632fdb7cd672a7be31fbff64ffdea9b"
I0127 13:29:23.266485 531586 cri.go:89] found id: ""
I0127 13:29:23.266492 531586 cri.go:252] Stopping containers: [d766c6e246b1ecdf40d93c7468225dc4fad90ed651b6fe2c936d5fcc3d3d5170 b555d58cf5206b0a6fd83f0168cfb804792c0d83433705e5ac60118320fece46 ab62389a943f48c8078929c4529e496d71a5839cc3224de20672cab59cf3d31f e035113c19405073ac2218fc9137ccfb808c5e6b9a0a15344c76d9b3e648cf67 7005555f3a67b3371c40da7f69569f4070f3d54977562479fd46b12e40341ee1 5ff6602b0f8a4cd5e4cb51ca77e920e00b1c9e20d02131be56addb081e9027cc 2614ff10025bce8287c075ab3139b2d06632fdb7cd672a7be31fbff64ffdea9b]
I0127 13:29:23.266560 531586 ssh_runner.go:195] Run: which crictl
I0127 13:29:23.272382 531586 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 d766c6e246b1ecdf40d93c7468225dc4fad90ed651b6fe2c936d5fcc3d3d5170 b555d58cf5206b0a6fd83f0168cfb804792c0d83433705e5ac60118320fece46 ab62389a943f48c8078929c4529e496d71a5839cc3224de20672cab59cf3d31f e035113c19405073ac2218fc9137ccfb808c5e6b9a0a15344c76d9b3e648cf67 7005555f3a67b3371c40da7f69569f4070f3d54977562479fd46b12e40341ee1 5ff6602b0f8a4cd5e4cb51ca77e920e00b1c9e20d02131be56addb081e9027cc 2614ff10025bce8287c075ab3139b2d06632fdb7cd672a7be31fbff64ffdea9b
I0127 13:29:23.324924 531586 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0127 13:29:23.345385 531586 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0127 13:29:23.359679 531586 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0127 13:29:23.359712 531586 kubeadm.go:157] found existing configuration files:
I0127 13:29:23.359774 531586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0127 13:29:23.371542 531586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0127 13:29:23.371634 531586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0127 13:29:23.383083 531586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0127 13:29:23.393186 531586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0127 13:29:23.393267 531586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0127 13:29:23.406589 531586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0127 13:29:23.417348 531586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0127 13:29:23.417444 531586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0127 13:29:23.430008 531586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0127 13:29:23.441860 531586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0127 13:29:23.441965 531586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
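The four grep/rm pairs above repeat one idiom: if a kubeconfig is missing or doesn't reference https://control-plane.minikube.internal:8443, remove it so the kubeconfig init phase below regenerates it. Compact equivalent (sketch):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q https://control-plane.minikube.internal:8443 "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done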
I0127 13:29:23.452352 531586 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0127 13:29:23.463556 531586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0127 13:29:23.634151 531586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0127 13:29:24.791692 531586 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.15748875s)
I0127 13:29:24.791732 531586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0127 13:29:25.027708 531586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0127 13:29:25.110706 531586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0127 13:29:25.211743 531586 api_server.go:52] waiting for apiserver process to appear ...
I0127 13:29:25.211882 531586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 13:29:25.712041 531586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
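api_server.go repeats the pgrep every 500ms until a kube-apiserver whose command line mentions minikube appears. Equivalent wait (sketch):

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do sleep 0.5; done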
I0127 13:29:25.596457 529417 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0127 13:29:25.611060 529417 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0127 13:29:25.631563 529417 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0127 13:29:25.631668 529417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 13:29:25.631709 529417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-325510 minikube.k8s.io/updated_at=2025_01_27T13_29_25_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=0d71ce9b1959d04f0d9fa7dbc5639a49619ad89b minikube.k8s.io/name=default-k8s-diff-port-325510 minikube.k8s.io/primary=true
I0127 13:29:25.654141 529417 ops.go:34] apiserver oom_adj: -16
I0127 13:29:25.885770 529417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 13:29:26.386140 529417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 13:29:26.885887 529417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 13:29:27.386520 529417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 13:29:27.886746 529417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 13:29:28.386093 529417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 13:29:28.523381 529417 kubeadm.go:1113] duration metric: took 2.89179334s to wait for elevateKubeSystemPrivileges
I0127 13:29:28.523431 529417 kubeadm.go:394] duration metric: took 4m34.628614328s to StartCluster
I0127 13:29:28.523462 529417 settings.go:142] acquiring lock: {Name:mk070ebf22d35da2704f00750921836dbd2cd121 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 13:29:28.523566 529417 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20317-466901/kubeconfig
I0127 13:29:28.526181 529417 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-466901/kubeconfig: {Name:mkc116eec378af43ea8fefe45e11af3e19be85bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 13:29:28.526636 529417 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.7 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0127 13:29:28.526773 529417 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0127 13:29:28.526897 529417 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-325510"
I0127 13:29:28.526920 529417 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-325510"
I0127 13:29:28.526920 529417 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-325510"
W0127 13:29:28.526930 529417 addons.go:247] addon storage-provisioner should already be in state true
I0127 13:29:28.526943 529417 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-325510"
I0127 13:29:28.526965 529417 host.go:66] Checking if "default-k8s-diff-port-325510" exists ...
I0127 13:29:28.527036 529417 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-325510"
I0127 13:29:28.527054 529417 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-325510"
W0127 13:29:28.527061 529417 addons.go:247] addon dashboard should already be in state true
I0127 13:29:28.527086 529417 host.go:66] Checking if "default-k8s-diff-port-325510" exists ...
I0127 13:29:28.527083 529417 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-325510"
I0127 13:29:28.527117 529417 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-325510"
W0127 13:29:28.527128 529417 addons.go:247] addon metrics-server should already be in state true
I0127 13:29:28.527164 529417 host.go:66] Checking if "default-k8s-diff-port-325510" exists ...
I0127 13:29:28.527436 529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:29:28.527441 529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:29:28.526898 529417 config.go:182] Loaded profile config "default-k8s-diff-port-325510": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 13:29:28.527475 529417 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:29:28.527490 529417 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:29:28.527619 529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:29:28.527655 529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:29:28.527667 529417 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:29:28.527700 529417 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:29:28.528609 529417 out.go:177] * Verifying Kubernetes components...
I0127 13:29:28.530189 529417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 13:29:28.546697 529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35953
I0127 13:29:28.547331 529417 main.go:141] libmachine: () Calling .GetVersion
I0127 13:29:28.547485 529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43545
I0127 13:29:28.547528 529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46213
I0127 13:29:28.547893 529417 main.go:141] libmachine: () Calling .GetVersion
I0127 13:29:28.548297 529417 main.go:141] libmachine: () Calling .GetVersion
I0127 13:29:28.548482 529417 main.go:141] libmachine: Using API Version 1
I0127 13:29:28.548497 529417 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:29:28.548832 529417 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:29:28.549020 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetState
I0127 13:29:28.549338 529417 main.go:141] libmachine: Using API Version 1
I0127 13:29:28.549354 529417 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:29:28.549743 529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35541
I0127 13:29:28.549980 529417 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:29:28.550227 529417 main.go:141] libmachine: Using API Version 1
I0127 13:29:28.550241 529417 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:29:28.550306 529417 main.go:141] libmachine: () Calling .GetVersion
I0127 13:29:28.550880 529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:29:28.550926 529417 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:29:28.551223 529417 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:29:28.551394 529417 main.go:141] libmachine: Using API Version 1
I0127 13:29:28.551416 529417 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:29:28.551971 529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:29:28.552001 529417 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:29:28.552189 529417 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:29:28.552980 529417 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-325510"
W0127 13:29:28.553005 529417 addons.go:247] addon default-storageclass should already be in state true
I0127 13:29:28.553038 529417 host.go:66] Checking if "default-k8s-diff-port-325510" exists ...
I0127 13:29:28.553380 529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:29:28.553426 529417 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:29:28.555977 529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:29:28.556013 529417 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:29:28.572312 529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32967
I0127 13:29:28.573004 529417 main.go:141] libmachine: () Calling .GetVersion
I0127 13:29:28.573598 529417 main.go:141] libmachine: Using API Version 1
I0127 13:29:28.573617 529417 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:29:28.573988 529417 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:29:28.574040 529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44851
I0127 13:29:28.574171 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetState
I0127 13:29:28.574508 529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43399
I0127 13:29:28.575096 529417 main.go:141] libmachine: () Calling .GetVersion
I0127 13:29:28.575836 529417 main.go:141] libmachine: () Calling .GetVersion
I0127 13:29:28.576253 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .DriverName
I0127 13:29:28.576355 529417 main.go:141] libmachine: Using API Version 1
I0127 13:29:28.576375 529417 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:29:28.577245 529417 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:29:28.577419 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetState
I0127 13:29:28.579103 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .DriverName
I0127 13:29:28.579756 529417 main.go:141] libmachine: Using API Version 1
I0127 13:29:28.579779 529417 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:29:28.580518 529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42715
I0127 13:29:28.580886 529417 main.go:141] libmachine: () Calling .GetVersion
I0127 13:29:28.581173 529417 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0127 13:29:28.581406 529417 main.go:141] libmachine: Using API Version 1
I0127 13:29:28.581423 529417 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:29:28.581695 529417 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:29:28.581855 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetState
I0127 13:29:28.582619 529417 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0127 13:29:28.583309 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .DriverName
I0127 13:29:28.583662 529417 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:29:28.584326 529417 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0127 13:29:28.584346 529417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0127 13:29:28.584368 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHHostname
I0127 13:29:28.587322 529417 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0127 13:29:28.587999 529417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:29:28.588047 529417 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:29:28.591379 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHPort
I0127 13:29:28.591427 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
I0127 13:29:28.591456 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6c", ip: ""} in network mk-default-k8s-diff-port-325510: {Iface:virbr3 ExpiryTime:2025-01-27 14:24:40 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6c Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-325510 Clientid:01:52:54:00:c4:f9:6c}
I0127 13:29:28.591496 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined IP address 192.168.61.7 and MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
I0127 13:29:28.591585 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHKeyPath
I0127 13:29:28.591752 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHUsername
I0127 13:29:28.591911 529417 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/default-k8s-diff-port-325510/id_rsa Username:docker}
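The sshutil/ssh_runner lines here open an SSH client to the guest VM and then copy in-memory manifest bytes to it ("scp memory --> <path>"). Below is a minimal sketch of that pattern using golang.org/x/crypto/ssh; the helper name `pushBytes` and the use of `sudo tee` are illustrative assumptions, not minikube's actual ssh_runner implementation.

```go
// pushBytes mimics the "scp memory --> <path>" pattern from the log:
// it streams an in-memory payload to a file on the guest over SSH.
package main

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func pushBytes(addr, user, keyPath, dest string, payload []byte) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(payload)
	// Write via sudo tee so the unprivileged SSH user can place files
	// under root-owned paths like /etc/kubernetes/addons.
	return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", dest))
}
```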
I0127 13:29:28.592584 529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0127 13:29:28.592601 529417 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0127 13:29:28.592621 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHHostname
I0127 13:29:28.593660 529417 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0127 13:29:26.212209 531586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 13:29:26.236202 531586 api_server.go:72] duration metric: took 1.024459251s to wait for apiserver process to appear ...
I0127 13:29:26.236238 531586 api_server.go:88] waiting for apiserver healthz status ...
I0127 13:29:26.236266 531586 api_server.go:253] Checking apiserver healthz at https://192.168.72.46:8443/healthz ...
I0127 13:29:26.236911 531586 api_server.go:269] stopped: https://192.168.72.46:8443/healthz: Get "https://192.168.72.46:8443/healthz": dial tcp 192.168.72.46:8443: connect: connection refused
I0127 13:29:26.737118 531586 api_server.go:253] Checking apiserver healthz at https://192.168.72.46:8443/healthz ...
I0127 13:29:29.390944 531586 api_server.go:279] https://192.168.72.46:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0127 13:29:29.390990 531586 api_server.go:103] status: https://192.168.72.46:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0127 13:29:29.391010 531586 api_server.go:253] Checking apiserver healthz at https://192.168.72.46:8443/healthz ...
I0127 13:29:29.446439 531586 api_server.go:279] https://192.168.72.46:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0127 13:29:29.446477 531586 api_server.go:103] status: https://192.168.72.46:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0127 13:29:29.737006 531586 api_server.go:253] Checking apiserver healthz at https://192.168.72.46:8443/healthz ...
I0127 13:29:29.743881 531586 api_server.go:279] https://192.168.72.46:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0127 13:29:29.743915 531586 api_server.go:103] status: https://192.168.72.46:8443/healthz returned error 500:
[healthz body identical to the 500 response logged immediately above]
I0127 13:29:30.237168 531586 api_server.go:253] Checking apiserver healthz at https://192.168.72.46:8443/healthz ...
I0127 13:29:30.251557 531586 api_server.go:279] https://192.168.72.46:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0127 13:29:30.251594 531586 api_server.go:103] status: https://192.168.72.46:8443/healthz returned error 500:
[healthz body identical to the 500 response logged immediately above]
I0127 13:29:30.737227 531586 api_server.go:253] Checking apiserver healthz at https://192.168.72.46:8443/healthz ...
I0127 13:29:30.744425 531586 api_server.go:279] https://192.168.72.46:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0127 13:29:30.744461 531586 api_server.go:103] status: https://192.168.72.46:8443/healthz returned error 500:
[healthz body identical to the 500 response logged immediately above]
I0127 13:29:31.237274 531586 api_server.go:253] Checking apiserver healthz at https://192.168.72.46:8443/healthz ...
I0127 13:29:31.244159 531586 api_server.go:279] https://192.168.72.46:8443/healthz returned 200:
ok
I0127 13:29:31.252139 531586 api_server.go:141] control plane version: v1.32.1
I0127 13:29:31.252182 531586 api_server.go:131] duration metric: took 5.015933408s to wait for apiserver health ...
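The api_server.go lines above record a simple poll loop: GET /healthz until it returns 200 "ok". The early 403s most likely come from the probe hitting the endpoint anonymously before the rbac/bootstrap-roles poststarthook (shown as [-] in the 500 bodies) has installed the binding that lets unauthenticated clients read /healthz; the 500s enumerate poststarthooks that have not yet finished. A minimal sketch of such a loop follows; skipping TLS verification and the fixed 500ms interval are illustrative assumptions, not minikube's exact settings.

```go
// waitForHealthz polls the apiserver /healthz endpoint until it
// returns 200, mirroring the api_server.go retry loop in the log.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// The probe does not verify the apiserver's self-signed cert here.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				return nil // healthz returned 200: ok
			}
			// 403/500 bodies like the ones above mean "up, not ready yet".
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}
```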
I0127 13:29:31.252194 531586 cni.go:84] Creating CNI manager for ""
I0127 13:29:31.252203 531586 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0127 13:29:31.253925 531586 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0127 13:29:31.255434 531586 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0127 13:29:31.267804 531586 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
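The two lines above create /etc/cni/net.d and write a 496-byte bridge conflist to it. For orientation, here is an illustrative bridge CNI config of that general shape, embedded and sanity-checked from Go; the subnet, plugin options, and file contents are assumptions, not the exact conflist minikube writes.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// conflist is an illustrative bridge CNI config of the kind written to
// /etc/cni/net.d/1-k8s.conflist; all field values here are assumptions.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    }
  ]
}`

func main() {
	var v map[string]any
	if err := json.Unmarshal([]byte(conflist), &v); err != nil {
		panic(err)
	}
	fmt.Println("conflist parses, name =", v["name"])
}
```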
I0127 13:29:31.293560 531586 system_pods.go:43] waiting for kube-system pods to appear ...
I0127 13:29:31.313542 531586 system_pods.go:59] 8 kube-system pods found
I0127 13:29:31.313590 531586 system_pods.go:61] "coredns-668d6bf9bc-xvbfh" [0d7c4469-d90e-4487-8433-1167183525e3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0127 13:29:31.313601 531586 system_pods.go:61] "etcd-newest-cni-296225" [97ed55b3-82a8-4ecf-a721-26a592f2c8cd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0127 13:29:31.313612 531586 system_pods.go:61] "kube-apiserver-newest-cni-296225" [d31606a7-2b78-4859-80a7-35b783b0a444] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0127 13:29:31.313621 531586 system_pods.go:61] "kube-controller-manager-newest-cni-296225" [4d6c4da8-a13a-44c2-a877-13b9453142a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0127 13:29:31.313631 531586 system_pods.go:61] "kube-proxy-dzvvc" [1ec15899-c7eb-436d-9e74-fadde7ecacb8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0127 13:29:31.313640 531586 system_pods.go:61] "kube-scheduler-newest-cni-296225" [2c230f78-68ac-4abb-9cdd-5cf666793981] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0127 13:29:31.313655 531586 system_pods.go:61] "metrics-server-f79f97bbb-2pv7p" [1246f427-ed62-4202-8170-5ae96be7ccf5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0127 13:29:31.313671 531586 system_pods.go:61] "storage-provisioner" [7b83dbf7-d497-42bb-9489-614ae5ba76fa] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0127 13:29:31.313680 531586 system_pods.go:74] duration metric: took 20.080673ms to wait for pod list to return data ...
I0127 13:29:31.313709 531586 node_conditions.go:102] verifying NodePressure condition ...
I0127 13:29:31.321205 531586 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0127 13:29:31.321236 531586 node_conditions.go:123] node cpu capacity is 2
I0127 13:29:31.321251 531586 node_conditions.go:105] duration metric: took 7.532371ms to run NodePressure ...
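The node_conditions.go lines read each node's ephemeral-storage and CPU capacity and verify that no pressure conditions are set. A hedged client-go sketch of the same check; the function name and kubeconfig handling are mine, not minikube's node_conditions implementation.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// checkNodePressure lists nodes, prints capacity, and fails if
// MemoryPressure or DiskPressure is True -- the signal verified above.
func checkNodePressure(kubeconfig string) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name,
			n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
		for _, c := range n.Status.Conditions {
			if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) &&
				c.Status == corev1.ConditionTrue {
				return fmt.Errorf("node %s reports %s", n.Name, c.Type)
			}
		}
	}
	return nil
}
```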
I0127 13:29:31.321276 531586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0127 13:29:31.758136 531586 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0127 13:29:31.783447 531586 ops.go:34] apiserver oom_adj: -16
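ops.go confirms the apiserver's OOM score by reading /proc/<pid>/oom_adj; -16 makes the process one of the last candidates for the kernel OOM killer. A small sketch of the same read in Go (note oom_adj is the legacy knob, superseded by oom_score_adj on modern kernels):

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// oomAdj returns the legacy /proc/<pid>/oom_adj value for a process,
// as checked for kube-apiserver above (expected here: -16).
func oomAdj(pid int) (int, error) {
	b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
	if err != nil {
		return 0, err
	}
	return strconv.Atoi(strings.TrimSpace(string(b)))
}
```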
I0127 13:29:31.783539 531586 kubeadm.go:597] duration metric: took 8.606153189s to restartPrimaryControlPlane
I0127 13:29:31.783582 531586 kubeadm.go:394] duration metric: took 8.700235213s to StartCluster
I0127 13:29:31.783614 531586 settings.go:142] acquiring lock: {Name:mk070ebf22d35da2704f00750921836dbd2cd121 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 13:29:31.783739 531586 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20317-466901/kubeconfig
I0127 13:29:31.786536 531586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-466901/kubeconfig: {Name:mkc116eec378af43ea8fefe45e11af3e19be85bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 13:29:31.786926 531586 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.46 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0127 13:29:31.787022 531586 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0127 13:29:31.787188 531586 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-296225"
I0127 13:29:31.787308 531586 config.go:182] Loaded profile config "newest-cni-296225": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 13:29:31.787320 531586 addons.go:69] Setting metrics-server=true in profile "newest-cni-296225"
I0127 13:29:31.787353 531586 addons.go:238] Setting addon metrics-server=true in "newest-cni-296225"
W0127 13:29:31.787367 531586 addons.go:247] addon metrics-server should already be in state true
I0127 13:29:31.787318 531586 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-296225"
W0127 13:29:31.787388 531586 addons.go:247] addon storage-provisioner should already be in state true
I0127 13:29:31.787413 531586 host.go:66] Checking if "newest-cni-296225" exists ...
I0127 13:29:31.787446 531586 host.go:66] Checking if "newest-cni-296225" exists ...
I0127 13:29:31.787286 531586 addons.go:69] Setting dashboard=true in profile "newest-cni-296225"
I0127 13:29:31.787526 531586 addons.go:238] Setting addon dashboard=true in "newest-cni-296225"
W0127 13:29:31.787557 531586 addons.go:247] addon dashboard should already be in state true
I0127 13:29:31.787597 531586 host.go:66] Checking if "newest-cni-296225" exists ...
I0127 13:29:31.787246 531586 addons.go:69] Setting default-storageclass=true in profile "newest-cni-296225"
I0127 13:29:31.787654 531586 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-296225"
I0127 13:29:31.787886 531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:29:31.787922 531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:29:31.787946 531586 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:29:31.787971 531586 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:29:31.788040 531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:29:31.788067 531586 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:29:31.788279 531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:29:31.788348 531586 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:29:31.791198 531586 out.go:177] * Verifying Kubernetes components...
I0127 13:29:31.792729 531586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 13:29:31.809862 531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43917
I0127 13:29:31.810576 531586 main.go:141] libmachine: () Calling .GetVersion
I0127 13:29:31.810735 531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43481
I0127 13:29:31.811453 531586 main.go:141] libmachine: Using API Version 1
I0127 13:29:31.811479 531586 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:29:31.811565 531586 main.go:141] libmachine: () Calling .GetVersion
I0127 13:29:31.812009 531586 main.go:141] libmachine: Using API Version 1
I0127 13:29:31.812033 531586 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:29:31.812507 531586 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:29:31.814254 531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43261
I0127 13:29:31.814774 531586 main.go:141] libmachine: () Calling .GetVersion
I0127 13:29:31.815750 531586 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:29:31.816710 531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:29:31.816754 531586 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:29:31.817133 531586 main.go:141] libmachine: Using API Version 1
I0127 13:29:31.817157 531586 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:29:31.817572 531586 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:29:31.818143 531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:29:31.818200 531586 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:29:31.819519 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetState
I0127 13:29:31.824362 531586 addons.go:238] Setting addon default-storageclass=true in "newest-cni-296225"
W0127 13:29:31.824386 531586 addons.go:247] addon default-storageclass should already be in state true
I0127 13:29:31.824421 531586 host.go:66] Checking if "newest-cni-296225" exists ...
I0127 13:29:31.824804 531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:29:31.824849 531586 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:29:31.835403 531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33125
I0127 13:29:31.836274 531586 main.go:141] libmachine: () Calling .GetVersion
I0127 13:29:31.836962 531586 main.go:141] libmachine: Using API Version 1
I0127 13:29:31.836997 531586 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:29:31.837484 531586 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:29:31.838061 531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:29:31.838106 531586 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:29:31.839703 531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37671
I0127 13:29:31.844903 531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36575
I0127 13:29:31.850434 531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38467
I0127 13:29:31.864579 531586 main.go:141] libmachine: () Calling .GetVersion
I0127 13:29:31.864731 531586 main.go:141] libmachine: () Calling .GetVersion
I0127 13:29:31.864805 531586 main.go:141] libmachine: () Calling .GetVersion
I0127 13:29:31.865332 531586 main.go:141] libmachine: Using API Version 1
I0127 13:29:31.865353 531586 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:29:31.865507 531586 main.go:141] libmachine: Using API Version 1
I0127 13:29:31.865520 531586 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:29:31.865755 531586 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:29:31.865888 531586 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:29:31.866153 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetState
I0127 13:29:31.866263 531586 main.go:141] libmachine: Using API Version 1
I0127 13:29:31.866280 531586 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:29:31.866349 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetState
I0127 13:29:31.866765 531586 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:29:31.867410 531586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 13:29:31.867459 531586 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:29:31.869030 531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
I0127 13:29:31.870746 531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
I0127 13:29:31.871229 531586 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0127 13:29:31.872679 531586 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0127 13:29:31.872852 531586 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0127 13:29:31.872877 531586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0127 13:29:31.872899 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
I0127 13:29:31.874840 531586 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0127 13:29:31.874867 531586 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0127 13:29:31.874889 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
I0127 13:29:31.879359 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:29:31.879992 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:29:31.880845 531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
I0127 13:29:31.880876 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:29:31.880911 531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
I0127 13:29:31.880935 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:29:31.881182 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
I0127 13:29:31.881276 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
I0127 13:29:31.881374 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
I0127 13:29:31.881423 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
I0127 13:29:31.881494 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
I0127 13:29:31.881545 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
I0127 13:29:31.881692 531586 sshutil.go:53] new ssh client: &{IP:192.168.72.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/newest-cni-296225/id_rsa Username:docker}
I0127 13:29:31.881713 531586 sshutil.go:53] new ssh client: &{IP:192.168.72.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/newest-cni-296225/id_rsa Username:docker}
I0127 13:29:31.890590 531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42247
I0127 13:29:31.891311 531586 main.go:141] libmachine: () Calling .GetVersion
I0127 13:29:31.891961 531586 main.go:141] libmachine: Using API Version 1
I0127 13:29:31.891983 531586 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:29:31.892382 531586 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:29:31.892632 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetState
I0127 13:29:31.894810 531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
I0127 13:29:31.895223 531586 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0127 13:29:31.895240 531586 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0127 13:29:31.895450 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
I0127 13:29:31.895697 531586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43877
I0127 13:29:31.896698 531586 main.go:141] libmachine: () Calling .GetVersion
I0127 13:29:31.897633 531586 main.go:141] libmachine: Using API Version 1
I0127 13:29:31.897658 531586 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:29:31.898129 531586 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:29:31.898280 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetState
I0127 13:29:31.899110 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:29:31.899759 531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
I0127 13:29:31.899782 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:29:31.899962 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
I0127 13:29:31.900155 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
I0127 13:29:31.900337 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
I0127 13:29:31.900466 531586 sshutil.go:53] new ssh client: &{IP:192.168.72.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/newest-cni-296225/id_rsa Username:docker}
I0127 13:29:31.904472 531586 main.go:141] libmachine: (newest-cni-296225) Calling .DriverName
I0127 13:29:31.907054 531586 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0127 13:29:31.908332 531586 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0127 13:29:28.595128 529417 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0127 13:29:28.595147 529417 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0127 13:29:28.595179 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHHostname
I0127 13:29:28.596235 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
I0127 13:29:28.597222 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6c", ip: ""} in network mk-default-k8s-diff-port-325510: {Iface:virbr3 ExpiryTime:2025-01-27 14:24:40 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6c Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-325510 Clientid:01:52:54:00:c4:f9:6c}
I0127 13:29:28.597304 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined IP address 192.168.61.7 and MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
I0127 13:29:28.597628 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHPort
I0127 13:29:28.597788 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHKeyPath
I0127 13:29:28.597943 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHUsername
I0127 13:29:28.598078 529417 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/default-k8s-diff-port-325510/id_rsa Username:docker}
I0127 13:29:28.599130 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
I0127 13:29:28.599670 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6c", ip: ""} in network mk-default-k8s-diff-port-325510: {Iface:virbr3 ExpiryTime:2025-01-27 14:24:40 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6c Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-325510 Clientid:01:52:54:00:c4:f9:6c}
I0127 13:29:28.599694 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined IP address 192.168.61.7 and MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
I0127 13:29:28.599880 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHPort
I0127 13:29:28.600049 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHKeyPath
I0127 13:29:28.600195 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHUsername
I0127 13:29:28.600327 529417 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/default-k8s-diff-port-325510/id_rsa Username:docker}
I0127 13:29:28.610825 529417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39231
I0127 13:29:28.611379 529417 main.go:141] libmachine: () Calling .GetVersion
I0127 13:29:28.611919 529417 main.go:141] libmachine: Using API Version 1
I0127 13:29:28.611939 529417 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:29:28.612288 529417 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:29:28.612480 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetState
I0127 13:29:28.614326 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .DriverName
I0127 13:29:28.614636 529417 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0127 13:29:28.614668 529417 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0127 13:29:28.614688 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHHostname
I0127 13:29:28.618088 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
I0127 13:29:28.618805 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6c", ip: ""} in network mk-default-k8s-diff-port-325510: {Iface:virbr3 ExpiryTime:2025-01-27 14:24:40 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6c Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-325510 Clientid:01:52:54:00:c4:f9:6c}
I0127 13:29:28.618958 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | domain default-k8s-diff-port-325510 has defined IP address 192.168.61.7 and MAC address 52:54:00:c4:f9:6c in network mk-default-k8s-diff-port-325510
I0127 13:29:28.619294 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHPort
I0127 13:29:28.619517 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHKeyPath
I0127 13:29:28.619738 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .GetSSHUsername
I0127 13:29:28.619953 529417 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/default-k8s-diff-port-325510/id_rsa Username:docker}
I0127 13:29:28.750007 529417 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0127 13:29:28.770798 529417 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-325510" to be "Ready" ...
I0127 13:29:28.794753 529417 node_ready.go:49] node "default-k8s-diff-port-325510" has status "Ready":"True"
I0127 13:29:28.794783 529417 node_ready.go:38] duration metric: took 23.945006ms for node "default-k8s-diff-port-325510" to be "Ready" ...
I0127 13:29:28.794796 529417 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0127 13:29:28.801618 529417 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
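The node_ready/pod_ready lines wait up to 6m0s for the node and each system-critical pod to report Ready. A sketch of that wait with client-go's polling helper; the function name and 2-second interval are assumptions, not minikube's pod_ready.go internals.

```go
package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls a pod until its Ready condition is True,
// within the same 6m0s budget the log shows.
func waitPodReady(cs *kubernetes.Clientset, ns, name string) error {
	return wait.PollUntilContextTimeout(context.TODO(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat lookup errors as transient; keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
```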
I0127 13:29:28.841055 529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0127 13:29:28.841089 529417 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0127 13:29:28.865445 529417 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0127 13:29:28.865479 529417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0127 13:29:28.870120 529417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0127 13:29:28.887649 529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0127 13:29:28.887691 529417 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0127 13:29:28.908488 529417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0127 13:29:28.926717 529417 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0127 13:29:28.926752 529417 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0127 13:29:28.949234 529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0127 13:29:28.949269 529417 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0127 13:29:28.983403 529417 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0127 13:29:28.983438 529417 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0127 13:29:29.010532 529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0127 13:29:29.010567 529417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0127 13:29:29.085215 529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0127 13:29:29.085250 529417 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0127 13:29:29.085479 529417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0127 13:29:29.180902 529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0127 13:29:29.180935 529417 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0127 13:29:29.239792 529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0127 13:29:29.239830 529417 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0127 13:29:29.350534 529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0127 13:29:29.350566 529417 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0127 13:29:29.463271 529417 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0127 13:29:29.463315 529417 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0127 13:29:29.551176 529417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0127 13:29:30.055621 529417 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.147081618s)
I0127 13:29:30.055704 529417 main.go:141] libmachine: Making call to close driver server
I0127 13:29:30.055723 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .Close
I0127 13:29:30.056191 529417 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:29:30.056215 529417 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 13:29:30.056226 529417 main.go:141] libmachine: Making call to close driver server
I0127 13:29:30.056255 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | Closing plugin on server side
I0127 13:29:30.056323 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .Close
I0127 13:29:30.056341 529417 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.18618522s)
I0127 13:29:30.056436 529417 main.go:141] libmachine: Making call to close driver server
I0127 13:29:30.056465 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .Close
I0127 13:29:30.056627 529417 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:29:30.056649 529417 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 13:29:30.056963 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | Closing plugin on server side
I0127 13:29:30.058774 529417 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:29:30.058792 529417 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 13:29:30.058808 529417 main.go:141] libmachine: Making call to close driver server
I0127 13:29:30.058817 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .Close
I0127 13:29:30.059068 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | Closing plugin on server side
I0127 13:29:30.059083 529417 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:29:30.059098 529417 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 13:29:30.083977 529417 main.go:141] libmachine: Making call to close driver server
I0127 13:29:30.084003 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .Close
I0127 13:29:30.084571 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | Closing plugin on server side
I0127 13:29:30.084583 529417 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:29:30.084595 529417 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 13:29:30.830919 529417 pod_ready.go:103] pod "etcd-default-k8s-diff-port-325510" in "kube-system" namespace has status "Ready":"False"
I0127 13:29:30.961132 529417 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.875594685s)
I0127 13:29:30.961202 529417 main.go:141] libmachine: Making call to close driver server
I0127 13:29:30.961219 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .Close
I0127 13:29:30.963600 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | Closing plugin on server side
I0127 13:29:30.963608 529417 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:29:30.963645 529417 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 13:29:30.963654 529417 main.go:141] libmachine: Making call to close driver server
I0127 13:29:30.963662 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .Close
I0127 13:29:30.964368 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) DBG | Closing plugin on server side
I0127 13:29:30.964392 529417 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:29:30.964451 529417 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 13:29:30.964463 529417 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-325510"
I0127 13:29:32.478187 529417 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.926948394s)
I0127 13:29:32.478257 529417 main.go:141] libmachine: Making call to close driver server
I0127 13:29:32.478272 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .Close
I0127 13:29:32.478650 529417 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:29:32.478671 529417 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 13:29:32.478683 529417 main.go:141] libmachine: Making call to close driver server
I0127 13:29:32.478693 529417 main.go:141] libmachine: (default-k8s-diff-port-325510) Calling .Close
I0127 13:29:32.479015 529417 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:29:32.479033 529417 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 13:29:32.482147 529417 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p default-k8s-diff-port-325510 addons enable metrics-server
I0127 13:29:32.483736 529417 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
I0127 13:29:32.484840 529417 addons.go:514] duration metric: took 3.958103252s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
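Each addon above is applied by running kubectl on the guest with KUBECONFIG pinned to the cluster's config, one invocation per addon with multiple -f flags. A local sketch of that invocation pattern with os/exec; paths come from the log, error handling is simplified.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyManifests mirrors the "kubectl apply -f ... -f ..." calls above:
// one kubectl invocation per addon, KUBECONFIG pinned to the cluster.
func applyManifests(kubectl, kubeconfig string, manifests []string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply: %v\n%s", err, out)
	}
	return nil
}
```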
I0127 13:29:31.909581 531586 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0127 13:29:31.909609 531586 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0127 13:29:31.909639 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHHostname
I0127 13:29:31.913216 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:29:31.913664 531586 main.go:141] libmachine: (newest-cni-296225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:60:c9", ip: ""} in network mk-newest-cni-296225: {Iface:virbr4 ExpiryTime:2025-01-27 14:29:09 +0000 UTC Type:0 Mac:52:54:00:25:60:c9 Iaid: IPaddr:192.168.72.46 Prefix:24 Hostname:newest-cni-296225 Clientid:01:52:54:00:25:60:c9}
I0127 13:29:31.913695 531586 main.go:141] libmachine: (newest-cni-296225) DBG | domain newest-cni-296225 has defined IP address 192.168.72.46 and MAC address 52:54:00:25:60:c9 in network mk-newest-cni-296225
I0127 13:29:31.913996 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHPort
I0127 13:29:31.914211 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHKeyPath
I0127 13:29:31.914377 531586 main.go:141] libmachine: (newest-cni-296225) Calling .GetSSHUsername
I0127 13:29:31.914514 531586 sshutil.go:53] new ssh client: &{IP:192.168.72.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-466901/.minikube/machines/newest-cni-296225/id_rsa Username:docker}
I0127 13:29:32.089563 531586 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0127 13:29:32.127765 531586 api_server.go:52] waiting for apiserver process to appear ...
I0127 13:29:32.127896 531586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 13:29:32.149480 531586 api_server.go:72] duration metric: took 362.501205ms to wait for apiserver process to appear ...
I0127 13:29:32.149531 531586 api_server.go:88] waiting for apiserver healthz status ...
I0127 13:29:32.149576 531586 api_server.go:253] Checking apiserver healthz at https://192.168.72.46:8443/healthz ...
I0127 13:29:32.170573 531586 api_server.go:279] https://192.168.72.46:8443/healthz returned 200:
ok
I0127 13:29:32.171739 531586 api_server.go:141] control plane version: v1.32.1
I0127 13:29:32.171771 531586 api_server.go:131] duration metric: took 22.230634ms to wait for apiserver health ...
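The healthz probe logged above can be reproduced by hand from the host; a minimal sketch, using -k because the apiserver certificate is signed by minikube's cluster CA rather than a system-trusted one:
  curl -k https://192.168.72.46:8443/healthz    # expects HTTP 200 with the body "ok"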
I0127 13:29:32.171784 531586 system_pods.go:43] waiting for kube-system pods to appear ...
I0127 13:29:32.186307 531586 system_pods.go:59] 8 kube-system pods found
I0127 13:29:32.186342 531586 system_pods.go:61] "coredns-668d6bf9bc-xvbfh" [0d7c4469-d90e-4487-8433-1167183525e3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0127 13:29:32.186349 531586 system_pods.go:61] "etcd-newest-cni-296225" [97ed55b3-82a8-4ecf-a721-26a592f2c8cd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0127 13:29:32.186360 531586 system_pods.go:61] "kube-apiserver-newest-cni-296225" [d31606a7-2b78-4859-80a7-35b783b0a444] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0127 13:29:32.186368 531586 system_pods.go:61] "kube-controller-manager-newest-cni-296225" [4d6c4da8-a13a-44c2-a877-13b9453142a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0127 13:29:32.186373 531586 system_pods.go:61] "kube-proxy-dzvvc" [1ec15899-c7eb-436d-9e74-fadde7ecacb8] Running
I0127 13:29:32.186380 531586 system_pods.go:61] "kube-scheduler-newest-cni-296225" [2c230f78-68ac-4abb-9cdd-5cf666793981] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0127 13:29:32.186388 531586 system_pods.go:61] "metrics-server-f79f97bbb-2pv7p" [1246f427-ed62-4202-8170-5ae96be7ccf5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0127 13:29:32.186393 531586 system_pods.go:61] "storage-provisioner" [7b83dbf7-d497-42bb-9489-614ae5ba76fa] Running
I0127 13:29:32.186408 531586 system_pods.go:74] duration metric: took 14.616708ms to wait for pod list to return data ...
I0127 13:29:32.186420 531586 default_sa.go:34] waiting for default service account to be created ...
I0127 13:29:32.194387 531586 default_sa.go:45] found service account: "default"
I0127 13:29:32.194429 531586 default_sa.go:55] duration metric: took 7.999321ms for default service account to be created ...
I0127 13:29:32.194447 531586 kubeadm.go:582] duration metric: took 407.475818ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
I0127 13:29:32.194469 531586 node_conditions.go:102] verifying NodePressure condition ...
I0127 13:29:32.215128 531586 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0127 13:29:32.215228 531586 node_conditions.go:123] node cpu capacity is 2
I0127 13:29:32.215257 531586 node_conditions.go:105] duration metric: took 20.782574ms to run NodePressure ...
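The NodePressure verification reads the same fields kubectl exposes on the node object; a sketch, assuming the kubeconfig context already points at this profile:
  kubectl get node newest-cni-296225 -o jsonpath='{.status.capacity}'
  kubectl get node newest-cni-296225 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'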
I0127 13:29:32.215325 531586 start.go:241] waiting for startup goroutines ...
I0127 13:29:32.224708 531586 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0127 13:29:32.224738 531586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0127 13:29:32.233504 531586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0127 13:29:32.295258 531586 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0127 13:29:32.295311 531586 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0127 13:29:32.340500 531586 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0127 13:29:32.340623 531586 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0127 13:29:32.552816 531586 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0127 13:29:32.552969 531586 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0127 13:29:32.615247 531586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0127 13:29:32.615684 531586 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0127 13:29:32.615709 531586 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0127 13:29:32.772893 531586 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0127 13:29:32.772938 531586 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0127 13:29:32.831244 531586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0127 13:29:32.939523 531586 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0127 13:29:32.939558 531586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0127 13:29:33.121982 531586 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0127 13:29:33.122026 531586 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0127 13:29:33.248581 531586 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0127 13:29:33.248619 531586 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0127 13:29:33.339337 531586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.105786367s)
I0127 13:29:33.339401 531586 main.go:141] libmachine: Making call to close driver server
I0127 13:29:33.339413 531586 main.go:141] libmachine: (newest-cni-296225) Calling .Close
I0127 13:29:33.341380 531586 main.go:141] libmachine: (newest-cni-296225) DBG | Closing plugin on server side
I0127 13:29:33.341463 531586 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:29:33.341484 531586 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 13:29:33.341498 531586 main.go:141] libmachine: Making call to close driver server
I0127 13:29:33.341511 531586 main.go:141] libmachine: (newest-cni-296225) Calling .Close
I0127 13:29:33.342973 531586 main.go:141] libmachine: (newest-cni-296225) DBG | Closing plugin on server side
I0127 13:29:33.342984 531586 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:29:33.342995 531586 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 13:29:33.350366 531586 main.go:141] libmachine: Making call to close driver server
I0127 13:29:33.350388 531586 main.go:141] libmachine: (newest-cni-296225) Calling .Close
I0127 13:29:33.350671 531586 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:29:33.350685 531586 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 13:29:33.367462 531586 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0127 13:29:33.367490 531586 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0127 13:29:33.428952 531586 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0127 13:29:33.428989 531586 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0127 13:29:33.512094 531586 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0127 13:29:33.512127 531586 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0127 13:29:33.585612 531586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0127 13:29:34.628686 531586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.013367863s)
I0127 13:29:34.628749 531586 main.go:141] libmachine: Making call to close driver server
I0127 13:29:34.628761 531586 main.go:141] libmachine: (newest-cni-296225) Calling .Close
I0127 13:29:34.629106 531586 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:29:34.629133 531586 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 13:29:34.629143 531586 main.go:141] libmachine: Making call to close driver server
I0127 13:29:34.629153 531586 main.go:141] libmachine: (newest-cni-296225) Calling .Close
I0127 13:29:34.629394 531586 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:29:34.629407 531586 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 13:29:34.834013 531586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.002708663s)
I0127 13:29:34.834087 531586 main.go:141] libmachine: Making call to close driver server
I0127 13:29:34.834105 531586 main.go:141] libmachine: (newest-cni-296225) Calling .Close
I0127 13:29:34.834399 531586 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:29:34.834418 531586 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 13:29:34.834427 531586 main.go:141] libmachine: Making call to close driver server
I0127 13:29:34.834435 531586 main.go:141] libmachine: (newest-cni-296225) Calling .Close
I0127 13:29:34.834714 531586 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:29:34.834733 531586 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 13:29:34.834746 531586 addons.go:479] Verifying addon metrics-server=true in "newest-cni-296225"
I0127 13:29:35.573250 531586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.987594335s)
I0127 13:29:35.573316 531586 main.go:141] libmachine: Making call to close driver server
I0127 13:29:35.573332 531586 main.go:141] libmachine: (newest-cni-296225) Calling .Close
I0127 13:29:35.573696 531586 main.go:141] libmachine: (newest-cni-296225) DBG | Closing plugin on server side
I0127 13:29:35.573748 531586 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:29:35.573762 531586 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 13:29:35.573820 531586 main.go:141] libmachine: Making call to close driver server
I0127 13:29:35.573835 531586 main.go:141] libmachine: (newest-cni-296225) Calling .Close
I0127 13:29:35.574254 531586 main.go:141] libmachine: (newest-cni-296225) DBG | Closing plugin on server side
I0127 13:29:35.575985 531586 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:29:35.576005 531586 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 13:29:35.577914 531586 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p newest-cni-296225 addons enable metrics-server
I0127 13:29:35.579611 531586 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
I0127 13:29:35.580983 531586 addons.go:514] duration metric: took 3.79397273s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
I0127 13:29:35.581031 531586 start.go:246] waiting for cluster config update ...
I0127 13:29:35.581050 531586 start.go:255] writing updated cluster config ...
I0127 13:29:35.581368 531586 ssh_runner.go:195] Run: rm -f paused
I0127 13:29:35.638909 531586 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
I0127 13:29:35.640552 531586 out.go:177] * Done! kubectl is now configured to use "newest-cni-296225" cluster and "default" namespace by default
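Since minikube names the kubeconfig context after the profile, the final state can be confirmed with:
  kubectl config current-context    # prints "newest-cni-296225"
  kubectl get nodes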
I0127 13:29:33.314653 529417 pod_ready.go:103] pod "etcd-default-k8s-diff-port-325510" in "kube-system" namespace has status "Ready":"False"
I0127 13:29:34.308087 529417 pod_ready.go:93] pod "etcd-default-k8s-diff-port-325510" in "kube-system" namespace has status "Ready":"True"
I0127 13:29:34.308114 529417 pod_ready.go:82] duration metric: took 5.506466228s for pod "etcd-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
I0127 13:29:34.308126 529417 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
I0127 13:29:34.314009 529417 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-325510" in "kube-system" namespace has status "Ready":"True"
I0127 13:29:34.314033 529417 pod_ready.go:82] duration metric: took 5.900062ms for pod "kube-apiserver-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
I0127 13:29:34.314044 529417 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
I0127 13:29:34.321801 529417 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-325510" in "kube-system" namespace has status "Ready":"True"
I0127 13:29:34.321823 529417 pod_ready.go:82] duration metric: took 7.77255ms for pod "kube-controller-manager-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
I0127 13:29:34.321836 529417 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
I0127 13:29:36.328661 529417 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-325510" in "kube-system" namespace has status "Ready":"False"
I0127 13:29:38.833405 529417 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-325510" in "kube-system" namespace has status "Ready":"False"
I0127 13:29:39.331942 529417 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-325510" in "kube-system" namespace has status "Ready":"True"
I0127 13:29:39.331971 529417 pod_ready.go:82] duration metric: took 5.010119744s for pod "kube-scheduler-default-k8s-diff-port-325510" in "kube-system" namespace to be "Ready" ...
I0127 13:29:39.331983 529417 pod_ready.go:39] duration metric: took 10.537174991s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
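The pod_ready polling above is equivalent in effect to a kubectl wait; a hand-run sketch for one of the components (the component label is what kubeadm sets on its static pods):
  kubectl -n kube-system wait --for=condition=Ready pod -l component=kube-scheduler --timeout=6m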
I0127 13:29:39.332004 529417 api_server.go:52] waiting for apiserver process to appear ...
I0127 13:29:39.332061 529417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 13:29:39.364826 529417 api_server.go:72] duration metric: took 10.838138782s to wait for apiserver process to appear ...
I0127 13:29:39.364856 529417 api_server.go:88] waiting for apiserver healthz status ...
I0127 13:29:39.364880 529417 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
I0127 13:29:39.395339 529417 api_server.go:279] https://192.168.61.7:8444/healthz returned 200:
ok
I0127 13:29:39.403463 529417 api_server.go:141] control plane version: v1.32.1
I0127 13:29:39.403502 529417 api_server.go:131] duration metric: took 38.63787ms to wait for apiserver health ...
I0127 13:29:39.403515 529417 system_pods.go:43] waiting for kube-system pods to appear ...
I0127 13:29:39.428974 529417 system_pods.go:59] 9 kube-system pods found
I0127 13:29:39.429008 529417 system_pods.go:61] "coredns-668d6bf9bc-mgxmm" [15f65844-c002-4253-9f43-609e6d3d86c0] Running
I0127 13:29:39.429013 529417 system_pods.go:61] "coredns-668d6bf9bc-rlvv2" [b116f02c-d30f-4869-bef1-55722f0f1a58] Running
I0127 13:29:39.429016 529417 system_pods.go:61] "etcd-default-k8s-diff-port-325510" [88fd4825-b74c-43e0-8a3e-fd60bb409b76] Running
I0127 13:29:39.429021 529417 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-325510" [4eeff905-b36f-4be8-ac24-77c8421495c4] Running
I0127 13:29:39.429024 529417 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-325510" [07956b85-b521-44cc-be77-675703803a17] Running
I0127 13:29:39.429027 529417 system_pods.go:61] "kube-proxy-gb24h" [d0d50b9f-b02f-49dd-9a7a-78e202ce247a] Running
I0127 13:29:39.429031 529417 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-325510" [a7c2c0c5-c386-454d-9542-852b02901060] Running
I0127 13:29:39.429037 529417 system_pods.go:61] "metrics-server-f79f97bbb-vtvnn" [07e0c335-6a2b-4ef3-b153-3689cdb7ccaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0127 13:29:39.429041 529417 system_pods.go:61] "storage-provisioner" [7b76ca76-2bfc-44c4-bfc3-5ac3f4cde72b] Running
I0127 13:29:39.429048 529417 system_pods.go:74] duration metric: took 25.526569ms to wait for pod list to return data ...
I0127 13:29:39.429056 529417 default_sa.go:34] waiting for default service account to be created ...
I0127 13:29:39.449041 529417 default_sa.go:45] found service account: "default"
I0127 13:29:39.449083 529417 default_sa.go:55] duration metric: took 20.019081ms for default service account to be created ...
I0127 13:29:39.449098 529417 system_pods.go:137] waiting for k8s-apps to be running ...
I0127 13:29:39.468326 529417 system_pods.go:87] 9 kube-system pods found
==> container status <==
CONTAINER       IMAGE           CREATED          STATE     NAME                        ATTEMPT   POD ID          POD
2c59218aeb0b4   523cad1a4df73   38 seconds ago   Exited    dashboard-metrics-scraper   9         b97b8e84adc01   dashboard-metrics-scraper-86c6bf9756-whltq
63d1c3b56e594   07655ddf2eebe   21 minutes ago   Running   kubernetes-dashboard        0         f131828d89e6e   kubernetes-dashboard-7779f9b69b-l74bx
69d92ad422477   6e38f40d628db   21 minutes ago   Running   storage-provisioner         0         46d2d2a34739d   storage-provisioner
f328b03590da3   c69fa2e9cbf5f   21 minutes ago   Running   coredns                     0         193a5d0860335   coredns-668d6bf9bc-4qzkt
c124cf3989669   c69fa2e9cbf5f   21 minutes ago   Running   coredns                     0         0c04e67152ad0   coredns-668d6bf9bc-hpb7s
310d5e851b70e   e29f9c7391fd9   22 minutes ago   Running   kube-proxy                  0         ef472172035c0   kube-proxy-sxztd
f6cbefb95932d   a9e7e6b294baf   22 minutes ago   Running   etcd                        2         712e1d46f9460   etcd-no-preload-325431
8fc79b79be3e9   95c0bda56fc4d   22 minutes ago   Running   kube-apiserver              2         7eb1a821a76e9   kube-apiserver-no-preload-325431
9c420da9d39ea   2b0d6572d062c   22 minutes ago   Running   kube-scheduler              2         bbb87051682aa   kube-scheduler-no-preload-325431
08725f33f2201   019ee182b58e2   22 minutes ago   Running   kube-controller-manager     2         9d90f8ac6f519   kube-controller-manager-no-preload-325431
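This table reflects CRI runtime state; with the containerd runtime it can be regenerated from inside the VM:
  minikube -p no-preload-325431 ssh -- sudo crictl ps -a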
==> containerd <==
Jan 27 13:44:42 no-preload-325431 containerd[556]: time="2025-01-27T13:44:42.196435020Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
Jan 27 13:44:42 no-preload-325431 containerd[556]: time="2025-01-27T13:44:42.196676418Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
Jan 27 13:44:53 no-preload-325431 containerd[556]: time="2025-01-27T13:44:53.186537465Z" level=info msg="CreateContainer within sandbox \"b97b8e84adc017d4e671df39270b8b353d95d6d8c37314624eb1fc6e6a6ca4f1\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,}"
Jan 27 13:44:53 no-preload-325431 containerd[556]: time="2025-01-27T13:44:53.216616165Z" level=info msg="CreateContainer within sandbox \"b97b8e84adc017d4e671df39270b8b353d95d6d8c37314624eb1fc6e6a6ca4f1\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,} returns container id \"75e634727176de37a035b7ccbeb3bacc76576d3f33522560713fd2dd4075a6c6\""
Jan 27 13:44:53 no-preload-325431 containerd[556]: time="2025-01-27T13:44:53.217860425Z" level=info msg="StartContainer for \"75e634727176de37a035b7ccbeb3bacc76576d3f33522560713fd2dd4075a6c6\""
Jan 27 13:44:53 no-preload-325431 containerd[556]: time="2025-01-27T13:44:53.301749074Z" level=info msg="StartContainer for \"75e634727176de37a035b7ccbeb3bacc76576d3f33522560713fd2dd4075a6c6\" returns successfully"
Jan 27 13:44:53 no-preload-325431 containerd[556]: time="2025-01-27T13:44:53.361917812Z" level=info msg="shim disconnected" id=75e634727176de37a035b7ccbeb3bacc76576d3f33522560713fd2dd4075a6c6 namespace=k8s.io
Jan 27 13:44:53 no-preload-325431 containerd[556]: time="2025-01-27T13:44:53.361994213Z" level=warning msg="cleaning up after shim disconnected" id=75e634727176de37a035b7ccbeb3bacc76576d3f33522560713fd2dd4075a6c6 namespace=k8s.io
Jan 27 13:44:53 no-preload-325431 containerd[556]: time="2025-01-27T13:44:53.362004564Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 27 13:44:53 no-preload-325431 containerd[556]: time="2025-01-27T13:44:53.382397677Z" level=warning msg="cleanup warnings time=\"2025-01-27T13:44:53Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 27 13:44:54 no-preload-325431 containerd[556]: time="2025-01-27T13:44:54.147046811Z" level=info msg="RemoveContainer for \"b976f57e1830de2e572ff5852dd68053d68d7485608441238b1d167515b5200c\""
Jan 27 13:44:54 no-preload-325431 containerd[556]: time="2025-01-27T13:44:54.158663221Z" level=info msg="RemoveContainer for \"b976f57e1830de2e572ff5852dd68053d68d7485608441238b1d167515b5200c\" returns successfully"
Jan 27 13:49:53 no-preload-325431 containerd[556]: time="2025-01-27T13:49:53.184063663Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 27 13:49:53 no-preload-325431 containerd[556]: time="2025-01-27T13:49:53.195148681Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
Jan 27 13:49:53 no-preload-325431 containerd[556]: time="2025-01-27T13:49:53.197738662Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
Jan 27 13:49:53 no-preload-325431 containerd[556]: time="2025-01-27T13:49:53.197792325Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
Jan 27 13:49:56 no-preload-325431 containerd[556]: time="2025-01-27T13:49:56.185892120Z" level=info msg="CreateContainer within sandbox \"b97b8e84adc017d4e671df39270b8b353d95d6d8c37314624eb1fc6e6a6ca4f1\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,}"
Jan 27 13:49:56 no-preload-325431 containerd[556]: time="2025-01-27T13:49:56.213473862Z" level=info msg="CreateContainer within sandbox \"b97b8e84adc017d4e671df39270b8b353d95d6d8c37314624eb1fc6e6a6ca4f1\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,} returns container id \"2c59218aeb0b4d6b9b227cf4431b9d542a626a12d9b570a16068eeae269073c0\""
Jan 27 13:49:56 no-preload-325431 containerd[556]: time="2025-01-27T13:49:56.214545707Z" level=info msg="StartContainer for \"2c59218aeb0b4d6b9b227cf4431b9d542a626a12d9b570a16068eeae269073c0\""
Jan 27 13:49:56 no-preload-325431 containerd[556]: time="2025-01-27T13:49:56.300793920Z" level=info msg="StartContainer for \"2c59218aeb0b4d6b9b227cf4431b9d542a626a12d9b570a16068eeae269073c0\" returns successfully"
Jan 27 13:49:56 no-preload-325431 containerd[556]: time="2025-01-27T13:49:56.354783382Z" level=info msg="shim disconnected" id=2c59218aeb0b4d6b9b227cf4431b9d542a626a12d9b570a16068eeae269073c0 namespace=k8s.io
Jan 27 13:49:56 no-preload-325431 containerd[556]: time="2025-01-27T13:49:56.354850223Z" level=warning msg="cleaning up after shim disconnected" id=2c59218aeb0b4d6b9b227cf4431b9d542a626a12d9b570a16068eeae269073c0 namespace=k8s.io
Jan 27 13:49:56 no-preload-325431 containerd[556]: time="2025-01-27T13:49:56.354862715Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 27 13:49:56 no-preload-325431 containerd[556]: time="2025-01-27T13:49:56.914494028Z" level=info msg="RemoveContainer for \"75e634727176de37a035b7ccbeb3bacc76576d3f33522560713fd2dd4075a6c6\""
Jan 27 13:49:56 no-preload-325431 containerd[556]: time="2025-01-27T13:49:56.922285640Z" level=info msg="RemoveContainer for \"75e634727176de37a035b7ccbeb3bacc76576d3f33522560713fd2dd4075a6c6\" returns successfully"
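The repeated fake.domain pull failures are evidently the condition under test: the suite points metrics-server at an unresolvable registry so the pod stays in a known failing state. The DNS failure can be reproduced directly (a sketch, run against the VM):
  minikube -p no-preload-325431 ssh -- sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4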
==> coredns [c124cf39896699b77317720c2e7e03c7013edb4a0c398425791784c0bb22c08a] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
CoreDNS-1.11.3
linux/amd64, go1.21.11, a6338e9
==> coredns [f328b03590da3b51a135d8436bb74ffaef7b999a0d57f694e8cd0ee45d9cd4fb] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
CoreDNS-1.11.3
linux/amd64, go1.21.11, a6338e9
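Both coredns replicas log the same configuration hash, as expected for a single shared ConfigMap; their logs can be pulled together via the shared label:
  kubectl -n kube-system logs -l k8s-app=kube-dns --prefix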
==> describe nodes <==
Name: no-preload-325431
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=no-preload-325431
kubernetes.io/os=linux
minikube.k8s.io/commit=0d71ce9b1959d04f0d9fa7dbc5639a49619ad89b
minikube.k8s.io/name=no-preload-325431
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_01_27T13_28_29_0700
minikube.k8s.io/version=v1.35.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 27 Jan 2025 13:28:25 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: no-preload-325431
AcquireTime: <unset>
RenewTime: Mon, 27 Jan 2025 13:50:25 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Mon, 27 Jan 2025 13:49:22 +0000   Mon, 27 Jan 2025 13:28:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Mon, 27 Jan 2025 13:49:22 +0000   Mon, 27 Jan 2025 13:28:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Mon, 27 Jan 2025 13:49:22 +0000   Mon, 27 Jan 2025 13:28:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Mon, 27 Jan 2025 13:49:22 +0000   Mon, 27 Jan 2025 13:28:25 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
InternalIP: 192.168.50.116
Hostname: no-preload-325431
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 2164184Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 2164184Ki
pods: 110
System Info:
Machine ID: 102af14d504a46e9aa5f69946e6b1af9
System UUID: 102af14d-504a-46e9-aa5f-69946e6b1af9
Boot ID: baa560a6-23ce-43ec-bfff-051eeec1c311
Kernel Version: 5.10.207
OS Image: Buildroot 2023.02.9
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.7.23
Kubelet Version: v1.32.1
Kube-Proxy Version: v1.32.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (11 in total)
  Namespace              Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------              ----                                          ------------  ----------  ---------------  -------------  ---
  kube-system            coredns-668d6bf9bc-4qzkt                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
  kube-system            coredns-668d6bf9bc-hpb7s                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
  kube-system            etcd-no-preload-325431                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
  kube-system            kube-apiserver-no-preload-325431              250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
  kube-system            kube-controller-manager-no-preload-325431     200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
  kube-system            kube-proxy-sxztd                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
  kube-system            kube-scheduler-no-preload-325431              100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
  kube-system            metrics-server-f79f97bbb-z7vjh                100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
  kube-system            storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
  kubernetes-dashboard   dashboard-metrics-scraper-86c6bf9756-whltq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
  kubernetes-dashboard   kubernetes-dashboard-7779f9b69b-l74bx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                950m (47%)   0 (0%)
  memory             440Mi (20%)  340Mi (16%)
  ephemeral-storage  0 (0%)       0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)
Events:
  Type    Reason                   Age   From             Message
  ----    ------                   ----  ----             -------
  Normal  Starting                 21m   kube-proxy
  Normal  Starting                 22m   kubelet          Starting kubelet.
  Normal  NodeAllocatableEnforced  22m   kubelet          Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientMemory  22m   kubelet          Node no-preload-325431 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    22m   kubelet          Node no-preload-325431 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     22m   kubelet          Node no-preload-325431 status is now: NodeHasSufficientPID
  Normal  RegisteredNode           22m   node-controller  Node no-preload-325431 event: Registered Node no-preload-325431 in Controller
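The section above is standard kubectl describe output and can be regenerated against this profile (minikube names the context after the profile):
  kubectl --context no-preload-325431 describe node no-preload-325431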
==> dmesg <==
[ +0.042610] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +4.986924] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +2.897118] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
[ +1.653093] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
[ +6.709826] systemd-fstab-generator[480]: Ignoring "noauto" option for root device
[ +0.056055] kauditd_printk_skb: 1 callbacks suppressed
[ +0.061448] systemd-fstab-generator[492]: Ignoring "noauto" option for root device
[ +0.189315] systemd-fstab-generator[506]: Ignoring "noauto" option for root device
[ +0.121647] systemd-fstab-generator[518]: Ignoring "noauto" option for root device
[ +0.293864] systemd-fstab-generator[548]: Ignoring "noauto" option for root device
[ +1.489196] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
[ +2.327408] systemd-fstab-generator[746]: Ignoring "noauto" option for root device
[ +0.901291] kauditd_printk_skb: 225 callbacks suppressed
[Jan27 13:24] kauditd_printk_skb: 40 callbacks suppressed
[ +12.107985] kauditd_printk_skb: 82 callbacks suppressed
[Jan27 13:28] systemd-fstab-generator[3094]: Ignoring "noauto" option for root device
[ +7.082264] systemd-fstab-generator[3486]: Ignoring "noauto" option for root device
[ +0.138438] kauditd_printk_skb: 87 callbacks suppressed
[ +4.889129] systemd-fstab-generator[3574]: Ignoring "noauto" option for root device
[ +0.116028] kauditd_printk_skb: 12 callbacks suppressed
[ +5.188036] kauditd_printk_skb: 84 callbacks suppressed
[ +5.170159] kauditd_printk_skb: 16 callbacks suppressed
[ +5.775732] kauditd_printk_skb: 4 callbacks suppressed
==> etcd [f6cbefb95932d1ca8f242ac48b345cd84e86e1645198bb4017ab78eb469c44c1] <==
{"level":"info","ts":"2025-01-27T13:28:29.466666Z","caller":"traceutil/trace.go:171","msg":"trace[89735720] transaction","detail":"{read_only:false; response_revision:251; number_of_response:1; }","duration":"133.234985ms","start":"2025-01-27T13:28:29.325047Z","end":"2025-01-27T13:28:29.458282Z","steps":["trace[89735720] 'process raft request' (duration: 65.367463ms)","trace[89735720] 'compare' (duration: 67.453248ms)"],"step_count":2}
{"level":"info","ts":"2025-01-27T13:28:30.022627Z","caller":"traceutil/trace.go:171","msg":"trace[1688003952] transaction","detail":"{read_only:false; response_revision:267; number_of_response:1; }","duration":"124.963577ms","start":"2025-01-27T13:28:29.897637Z","end":"2025-01-27T13:28:30.022600Z","steps":["trace[1688003952] 'process raft request' (duration: 41.933674ms)","trace[1688003952] 'compare' (duration: 82.611068ms)"],"step_count":2}
{"level":"info","ts":"2025-01-27T13:28:45.692324Z","caller":"traceutil/trace.go:171","msg":"trace[1329335384] linearizableReadLoop","detail":"{readStateIndex:516; appliedIndex:515; }","duration":"153.549674ms","start":"2025-01-27T13:28:45.538747Z","end":"2025-01-27T13:28:45.692297Z","steps":["trace[1329335384] 'read index received' (duration: 153.240994ms)","trace[1329335384] 'applied index is now lower than readState.Index' (duration: 308.026µs)"],"step_count":2}
{"level":"info","ts":"2025-01-27T13:28:45.692563Z","caller":"traceutil/trace.go:171","msg":"trace[1413168219] transaction","detail":"{read_only:false; response_revision:501; number_of_response:1; }","duration":"180.289564ms","start":"2025-01-27T13:28:45.512261Z","end":"2025-01-27T13:28:45.692550Z","steps":["trace[1413168219] 'process raft request' (duration: 179.777083ms)"],"step_count":1}
{"level":"warn","ts":"2025-01-27T13:28:45.692632Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"153.862618ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-01-27T13:28:45.694495Z","caller":"traceutil/trace.go:171","msg":"trace[1326562105] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:501; }","duration":"155.73594ms","start":"2025-01-27T13:28:45.538719Z","end":"2025-01-27T13:28:45.694455Z","steps":["trace[1326562105] 'agreement among raft nodes before linearized reading' (duration: 153.869235ms)"],"step_count":1}
{"level":"info","ts":"2025-01-27T13:28:48.289928Z","caller":"traceutil/trace.go:171","msg":"trace[371225448] linearizableReadLoop","detail":"{readStateIndex:523; appliedIndex:523; }","duration":"102.015314ms","start":"2025-01-27T13:28:48.187893Z","end":"2025-01-27T13:28:48.289908Z","steps":["trace[371225448] 'read index received' (duration: 102.010597ms)","trace[371225448] 'applied index is now lower than readState.Index' (duration: 3.703µs)"],"step_count":2}
{"level":"warn","ts":"2025-01-27T13:28:48.292572Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.11503ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-f79f97bbb-z7vjh.181e8fb584d28d42\" limit:1 ","response":"range_response_count:1 size:814"}
{"level":"info","ts":"2025-01-27T13:28:48.292612Z","caller":"traceutil/trace.go:171","msg":"trace[1018723515] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-f79f97bbb-z7vjh.181e8fb584d28d42; range_end:; response_count:1; response_revision:507; }","duration":"101.275248ms","start":"2025-01-27T13:28:48.191323Z","end":"2025-01-27T13:28:48.292598Z","steps":["trace[1018723515] 'agreement among raft nodes before linearized reading' (duration: 101.163297ms)"],"step_count":1}
{"level":"warn","ts":"2025-01-27T13:28:48.292960Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.053319ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-f79f97bbb-z7vjh\" limit:1 ","response":"range_response_count:1 size:4559"}
{"level":"info","ts":"2025-01-27T13:28:48.292997Z","caller":"traceutil/trace.go:171","msg":"trace[225346907] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-f79f97bbb-z7vjh; range_end:; response_count:1; response_revision:507; }","duration":"105.099208ms","start":"2025-01-27T13:28:48.187887Z","end":"2025-01-27T13:28:48.292986Z","steps":["trace[225346907] 'agreement among raft nodes before linearized reading' (duration: 105.017786ms)"],"step_count":1}
{"level":"info","ts":"2025-01-27T13:28:48.289515Z","caller":"traceutil/trace.go:171","msg":"trace[142542872] transaction","detail":"{read_only:false; response_revision:507; number_of_response:1; }","duration":"113.448527ms","start":"2025-01-27T13:28:48.176045Z","end":"2025-01-27T13:28:48.289493Z","steps":["trace[142542872] 'process raft request' (duration: 113.278071ms)"],"step_count":1}
{"level":"info","ts":"2025-01-27T13:28:48.452957Z","caller":"traceutil/trace.go:171","msg":"trace[1333608902] transaction","detail":"{read_only:false; response_revision:509; number_of_response:1; }","duration":"115.97828ms","start":"2025-01-27T13:28:48.336968Z","end":"2025-01-27T13:28:48.452946Z","steps":["trace[1333608902] 'process raft request' (duration: 115.606445ms)"],"step_count":1}
{"level":"warn","ts":"2025-01-27T13:28:48.453348Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.781956ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-01-27T13:28:48.453378Z","caller":"traceutil/trace.go:171","msg":"trace[232578695] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:509; }","duration":"112.859513ms","start":"2025-01-27T13:28:48.340511Z","end":"2025-01-27T13:28:48.453370Z","steps":["trace[232578695] 'agreement among raft nodes before linearized reading' (duration: 112.79216ms)"],"step_count":1}
{"level":"info","ts":"2025-01-27T13:28:48.453259Z","caller":"traceutil/trace.go:171","msg":"trace[662514671] linearizableReadLoop","detail":"{readStateIndex:525; appliedIndex:524; }","duration":"112.118516ms","start":"2025-01-27T13:28:48.340584Z","end":"2025-01-27T13:28:48.452702Z","steps":["trace[662514671] 'read index received' (duration: 3.389634ms)","trace[662514671] 'applied index is now lower than readState.Index' (duration: 108.728402ms)"],"step_count":2}
{"level":"info","ts":"2025-01-27T13:38:23.222020Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":830}
{"level":"info","ts":"2025-01-27T13:38:23.265148Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":830,"took":"41.343905ms","hash":1792461569,"current-db-size-bytes":2740224,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2740224,"current-db-size-in-use":"2.7 MB"}
{"level":"info","ts":"2025-01-27T13:38:23.265777Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":1792461569,"revision":830,"compact-revision":-1}
{"level":"info","ts":"2025-01-27T13:43:23.230793Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1081}
{"level":"info","ts":"2025-01-27T13:43:23.236676Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1081,"took":"4.865267ms","hash":2239879784,"current-db-size-bytes":2740224,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1839104,"current-db-size-in-use":"1.8 MB"}
{"level":"info","ts":"2025-01-27T13:43:23.236926Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2239879784,"revision":1081,"compact-revision":830}
{"level":"info","ts":"2025-01-27T13:48:23.248481Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1341}
{"level":"info","ts":"2025-01-27T13:48:23.253651Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1341,"took":"4.444244ms","hash":281533610,"current-db-size-bytes":2740224,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1863680,"current-db-size-in-use":"1.9 MB"}
{"level":"info","ts":"2025-01-27T13:48:23.253852Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":281533610,"revision":1341,"compact-revision":1081}
==> kernel <==
13:50:34 up 26 min, 0 users, load average: 0.09, 0.21, 0.21
Linux no-preload-325431 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2023.02.9"
==> kube-apiserver [8fc79b79be3e960c95d6b40d47a560e4273b820618fabc89243fe61b8514ae93] <==
I0127 13:46:25.981080 1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0127 13:46:25.982305 1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
W0127 13:48:24.977938 1 handler_proxy.go:99] no RequestInfo found in the context
E0127 13:48:24.978277 1 controller.go:146] "Unhandled Error" err=<
Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
W0127 13:48:25.980341 1 handler_proxy.go:99] no RequestInfo found in the context
E0127 13:48:25.980431 1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
W0127 13:48:25.980471 1 handler_proxy.go:99] no RequestInfo found in the context
E0127 13:48:25.980785 1 controller.go:102] "Unhandled Error" err=<
loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
I0127 13:48:25.981711 1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0127 13:48:25.982900 1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
W0127 13:49:25.982483 1 handler_proxy.go:99] no RequestInfo found in the context
E0127 13:49:25.982630 1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
W0127 13:49:25.983653 1 handler_proxy.go:99] no RequestInfo found in the context
I0127 13:49:25.983840 1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
E0127 13:49:25.983722 1 controller.go:102] "Unhandled Error" err=<
loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
I0127 13:49:25.985927 1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
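The 503s above come from the aggregated metrics API whose backing pod cannot pull its image; the APIService condition tells the same story:
  kubectl get apiservice v1beta1.metrics.k8s.io    # Available stays False while metrics-server is down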
==> kube-controller-manager [08725f33f22015afbb4e9b267b2f8f3613d5ec097e94f2964d261efc74bdea31] <==
E0127 13:46:02.755286 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 13:46:02.827040 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0127 13:46:32.762523 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 13:46:32.835723 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0127 13:47:02.770373 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 13:47:02.843857 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0127 13:47:32.778488 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 13:47:32.851664 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0127 13:48:02.785452 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 13:48:02.862747 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0127 13:48:32.794635 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 13:48:32.871792 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0127 13:49:02.801635 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 13:49:02.885565 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
I0127 13:49:22.679537 1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="no-preload-325431"
E0127 13:49:32.811400 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 13:49:32.894581 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
I0127 13:49:56.935197 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="112.417µs"
E0127 13:50:02.819308 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 13:50:02.903505 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
I0127 13:50:04.915538 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="57.797µs"
I0127 13:50:07.199778 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="64.337µs"
I0127 13:50:20.211441 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="63.231µs"
E0127 13:50:32.828450 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 13:50:32.911645 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
==> kube-proxy [310d5e851b70e308d600ebdd221377ceadf9a6ff38cc099849a8a2506647bcb8] <==
add table ip kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^
>
E0127 13:28:35.726645 1 proxier.go:733] "Error cleaning up nftables rules" err=<
could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
add table ip6 kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^^
>
I0127 13:28:35.742917 1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.50.116"]
E0127 13:28:35.743010 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0127 13:28:35.911764 1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
I0127 13:28:35.911813 1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0127 13:28:35.911838 1 server_linux.go:170] "Using iptables Proxier"
I0127 13:28:35.939973 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0127 13:28:35.940350 1 server.go:497] "Version info" version="v1.32.1"
I0127 13:28:35.940363 1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0127 13:28:35.961605 1 config.go:329] "Starting node config controller"
I0127 13:28:35.961653 1 shared_informer.go:313] Waiting for caches to sync for node config
I0127 13:28:35.967209 1 config.go:199] "Starting service config controller"
I0127 13:28:35.967284 1 shared_informer.go:313] Waiting for caches to sync for service config
I0127 13:28:35.967317 1 config.go:105] "Starting endpoint slice config controller"
I0127 13:28:35.967321 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0127 13:28:36.063268 1 shared_informer.go:320] Caches are synced for node config
I0127 13:28:36.067958 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0127 13:28:36.068053 1 shared_informer.go:320] Caches are synced for service config
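The nftables cleanup errors at the top of this section are non-fatal: the kernel evidently lacks nft support, and kube-proxy falls back to the iptables proxier as logged above. The resulting rules can be inspected in the VM:
  minikube -p no-preload-325431 ssh -- sudo iptables-save -t nat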
==> kube-scheduler [9c420da9d39eae7f0ea1c575c0892ac22db6c016c9dee10f72698622302c559d] <==
W0127 13:28:25.855589 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0127 13:28:25.855662 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0127 13:28:25.868938 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0127 13:28:25.869011 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0127 13:28:25.914897 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0127 13:28:25.915012 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0127 13:28:25.924285 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0127 13:28:25.924355 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0127 13:28:25.949640 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0127 13:28:25.949727 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0127 13:28:26.157713 1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0127 13:28:26.157791 1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
W0127 13:28:26.183499 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0127 13:28:26.183597 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0127 13:28:26.221238 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
E0127 13:28:26.221291 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0127 13:28:26.263046 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0127 13:28:26.263165 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0127 13:28:26.273689 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0127 13:28:26.273724 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0127 13:28:26.286357 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0127 13:28:26.286394 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0127 13:28:26.320437 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0127 13:28:26.320498 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
I0127 13:28:29.172627 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
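
The burst of Forbidden errors above is the usual restart race: the scheduler's reflectors start listing before the API server has its RBAC machinery fully serving, and they retry until authorization catches up, which it does here by 13:28:29. If you want to check the same permission out of band, a SubjectAccessReview does it; this is a hedged sketch, with the clientset built as in the previous example:

    package main

    import (
        "context"
        "fmt"

        authorizationv1 "k8s.io/api/authorization/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Ask the API server whether system:kube-scheduler may list nodes cluster-wide,
        // mirroring one of the reflector list calls that was rejected above.
        sar := &authorizationv1.SubjectAccessReview{
            Spec: authorizationv1.SubjectAccessReviewSpec{
                User: "system:kube-scheduler",
                ResourceAttributes: &authorizationv1.ResourceAttributes{
                    Verb:     "list",
                    Resource: "nodes",
                },
            },
        }
        resp, err := cs.AuthorizationV1().SubjectAccessReviews().Create(context.TODO(), sar, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
    }
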
==> kubelet <==
Jan 27 13:49:40 no-preload-325431 kubelet[3493]: E0127 13:49:40.186224 3493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-z7vjh" podUID="f904e246-cad3-4c86-8a01-f8eea49bf563"
Jan 27 13:49:44 no-preload-325431 kubelet[3493]: I0127 13:49:44.182963 3493 scope.go:117] "RemoveContainer" containerID="75e634727176de37a035b7ccbeb3bacc76576d3f33522560713fd2dd4075a6c6"
Jan 27 13:49:44 no-preload-325431 kubelet[3493]: E0127 13:49:44.183752 3493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-whltq_kubernetes-dashboard(1b50763d-b860-4f23-92b4-31db0fc0acf2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-whltq" podUID="1b50763d-b860-4f23-92b4-31db0fc0acf2"
Jan 27 13:49:53 no-preload-325431 kubelet[3493]: E0127 13:49:53.198337 3493 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
Jan 27 13:49:53 no-preload-325431 kubelet[3493]: E0127 13:49:53.198713 3493 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
Jan 27 13:49:53 no-preload-325431 kubelet[3493]: E0127 13:49:53.199048 3493 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hcjt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-f79f97bbb-z7vjh_kube-system(f904e246-cad3-4c86-8a01-f8eea49bf563): ErrImagePull: failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
Jan 27 13:49:53 no-preload-325431 kubelet[3493]: E0127 13:49:53.200612 3493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-z7vjh" podUID="f904e246-cad3-4c86-8a01-f8eea49bf563"
Jan 27 13:49:56 no-preload-325431 kubelet[3493]: I0127 13:49:56.182486 3493 scope.go:117] "RemoveContainer" containerID="75e634727176de37a035b7ccbeb3bacc76576d3f33522560713fd2dd4075a6c6"
Jan 27 13:49:56 no-preload-325431 kubelet[3493]: I0127 13:49:56.912016 3493 scope.go:117] "RemoveContainer" containerID="75e634727176de37a035b7ccbeb3bacc76576d3f33522560713fd2dd4075a6c6"
Jan 27 13:49:56 no-preload-325431 kubelet[3493]: I0127 13:49:56.912438 3493 scope.go:117] "RemoveContainer" containerID="2c59218aeb0b4d6b9b227cf4431b9d542a626a12d9b570a16068eeae269073c0"
Jan 27 13:49:56 no-preload-325431 kubelet[3493]: E0127 13:49:56.912646 3493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-whltq_kubernetes-dashboard(1b50763d-b860-4f23-92b4-31db0fc0acf2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-whltq" podUID="1b50763d-b860-4f23-92b4-31db0fc0acf2"
Jan 27 13:50:04 no-preload-325431 kubelet[3493]: I0127 13:50:04.891072 3493 scope.go:117] "RemoveContainer" containerID="2c59218aeb0b4d6b9b227cf4431b9d542a626a12d9b570a16068eeae269073c0"
Jan 27 13:50:04 no-preload-325431 kubelet[3493]: E0127 13:50:04.891911 3493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-whltq_kubernetes-dashboard(1b50763d-b860-4f23-92b4-31db0fc0acf2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-whltq" podUID="1b50763d-b860-4f23-92b4-31db0fc0acf2"
Jan 27 13:50:07 no-preload-325431 kubelet[3493]: E0127 13:50:07.183300 3493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-z7vjh" podUID="f904e246-cad3-4c86-8a01-f8eea49bf563"
Jan 27 13:50:16 no-preload-325431 kubelet[3493]: I0127 13:50:16.185914 3493 scope.go:117] "RemoveContainer" containerID="2c59218aeb0b4d6b9b227cf4431b9d542a626a12d9b570a16068eeae269073c0"
Jan 27 13:50:16 no-preload-325431 kubelet[3493]: E0127 13:50:16.186200 3493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-whltq_kubernetes-dashboard(1b50763d-b860-4f23-92b4-31db0fc0acf2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-whltq" podUID="1b50763d-b860-4f23-92b4-31db0fc0acf2"
Jan 27 13:50:20 no-preload-325431 kubelet[3493]: E0127 13:50:20.183545 3493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-z7vjh" podUID="f904e246-cad3-4c86-8a01-f8eea49bf563"
Jan 27 13:50:28 no-preload-325431 kubelet[3493]: E0127 13:50:28.206472 3493 iptables.go:577] "Could not set up iptables canary" err=<
Jan 27 13:50:28 no-preload-325431 kubelet[3493]: error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
Jan 27 13:50:28 no-preload-325431 kubelet[3493]: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Jan 27 13:50:28 no-preload-325431 kubelet[3493]: Perhaps ip6tables or your kernel needs to be upgraded.
Jan 27 13:50:28 no-preload-325431 kubelet[3493]: > table="nat" chain="KUBE-KUBELET-CANARY"
Jan 27 13:50:29 no-preload-325431 kubelet[3493]: I0127 13:50:29.182389 3493 scope.go:117] "RemoveContainer" containerID="2c59218aeb0b4d6b9b227cf4431b9d542a626a12d9b570a16068eeae269073c0"
Jan 27 13:50:29 no-preload-325431 kubelet[3493]: E0127 13:50:29.182835 3493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-whltq_kubernetes-dashboard(1b50763d-b860-4f23-92b4-31db0fc0acf2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-whltq" podUID="1b50763d-b860-4f23-92b4-31db0fc0acf2"
Jan 27 13:50:32 no-preload-325431 kubelet[3493]: E0127 13:50:32.183849 3493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-z7vjh" podUID="f904e246-cad3-4c86-8a01-f8eea49bf563"
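
Every metrics-server failure above bottoms out in the same step: the hostname fake.domain does not resolve, so containerd's HEAD request to the registry can never be sent, and the kubelet alternates between ErrImagePull and ImagePullBackOff. The failing step can be reproduced in isolation; the hostname is copied from the log, everything else is a minimal sketch:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Mirrors the DNS resolution step behind the ErrImagePull above.
        addrs, err := net.LookupHost("fake.domain")
        if err != nil {
            // Expected: "lookup fake.domain: no such host" (exact wording varies by resolver).
            fmt.Println("lookup failed:", err)
            return
        }
        fmt.Println("resolved:", addrs)
    }
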
==> kubernetes-dashboard [63d1c3b56e594b09fa04be7e99fd9b3090948c50a1e0413d623d6c1658fa2fbf] <==
2025/01/27 13:38:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 13:38:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 13:39:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 13:39:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 13:40:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 13:40:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 13:41:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 13:41:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 13:42:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 13:42:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 13:43:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 13:43:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 13:44:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 13:44:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 13:45:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 13:45:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 13:46:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 13:46:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 13:47:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 13:47:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 13:48:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 13:48:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 13:49:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 13:49:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 13:50:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
==> storage-provisioner [69d92ad422477870702389d231443715a6ccf0a5f7ffcac6d86ac0f46c9c7a46] <==
I0127 13:28:35.971454 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0127 13:28:36.012911 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0127 13:28:36.012974 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0127 13:28:36.034583 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0127 13:28:36.037007 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-325431_af4434bc-f34b-4451-ae25-c663fba38490!
I0127 13:28:36.038799 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"62559846-3b0f-47fd-992f-23dd8f800587", APIVersion:"v1", ResourceVersion:"397", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-325431_af4434bc-f34b-4451-ae25-c663fba38490 became leader
I0127 13:28:36.144365 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-325431_af4434bc-f34b-4451-ae25-c663fba38490!
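
The provisioner's "attempting to acquire leader lease ... successfully acquired" pair is client-go leader election; the event above shows it using the older Endpoints-based lock, while current client-go recommends a Lease lock. A rough sketch of the same flow with a Lease lock, where the identity and timing values are illustrative rather than the provisioner's actual settings:

    package main

    import (
        "context"
        "os"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        id, _ := os.Hostname() // lease holder identity (illustrative)

        lock := &resourcelock.LeaseLock{
            LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
            Client:     cs.CoordinationV1(),
            LockConfig: resourcelock.ResourceLockConfig{Identity: id},
        }
        leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
            Lock:          lock,
            LeaseDuration: 15 * time.Second,
            RenewDeadline: 10 * time.Second,
            RetryPeriod:   2 * time.Second,
            Callbacks: leaderelection.LeaderCallbacks{
                OnStartedLeading: func(ctx context.Context) {
                    // Only the lease holder runs the controller ("Starting provisioner controller ...").
                },
                OnStoppedLeading: func() {
                    os.Exit(1) // lost the lease; let the pod restart and re-contend
                },
            },
        })
    }
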
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-325431 -n no-preload-325431
helpers_test.go:261: (dbg) Run: kubectl --context no-preload-325431 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-z7vjh
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context no-preload-325431 describe pod metrics-server-f79f97bbb-z7vjh
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-325431 describe pod metrics-server-f79f97bbb-z7vjh: exit status 1 (68.403671ms)
** stderr **
Error from server (NotFound): pods "metrics-server-f79f97bbb-z7vjh" not found
** /stderr **
helpers_test.go:279: kubectl --context no-preload-325431 describe pod metrics-server-f79f97bbb-z7vjh: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (1623.09s)
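
The post-mortem's non-running-pod query at helpers_test.go:261 is a plain field selector. For reference, the equivalent client-go call, assuming a clientset cs built as in the sketches above, would look roughly like:

    // List non-running pods across all namespaces, as the kubectl call above does.
    pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
        FieldSelector: "status.phase!=Running",
    })
    if err != nil {
        panic(err)
    }
    for _, p := range pods.Items {
        fmt.Println(p.Namespace + "/" + p.Name)
    }

Note the race the failure itself illustrates: the pod listed as non-running at helpers_test.go:272 was gone by the time the describe at helpers_test.go:277 ran, hence the NotFound above.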