=== RUN TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run: out/minikube-linux-amd64 start -p no-preload-976043 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p no-preload-976043 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.32.1: signal: killed (26m29.889825874s)
-- stdout --
* [no-preload-976043] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=20319
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/20319-348858/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-348858/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the kvm2 driver based on existing profile
* Starting "no-preload-976043" primary control-plane node in "no-preload-976043" cluster
* Restarting existing kvm2 VM for "no-preload-976043" ...
* Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
* Configuring bridge CNI (Container Networking Interface) ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image docker.io/kubernetesui/dashboard:v2.7.0
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
- Using image registry.k8s.io/echoserver:1.4
- Using image fake.domain/registry.k8s.io/echoserver:1.4
* Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p no-preload-976043 addons enable metrics-server
* Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
-- /stdout --
** stderr **
I0127 11:42:03.096599 397538 out.go:345] Setting OutFile to fd 1 ...
I0127 11:42:03.096697 397538 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:42:03.096709 397538 out.go:358] Setting ErrFile to fd 2...
I0127 11:42:03.096716 397538 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:42:03.096879 397538 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-348858/.minikube/bin
I0127 11:42:03.097419 397538 out.go:352] Setting JSON to false
I0127 11:42:03.098366 397538 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":8668,"bootTime":1737969455,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0127 11:42:03.098467 397538 start.go:139] virtualization: kvm guest
I0127 11:42:03.100127 397538 out.go:177] * [no-preload-976043] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
I0127 11:42:03.101226 397538 out.go:177] - MINIKUBE_LOCATION=20319
I0127 11:42:03.101319 397538 notify.go:220] Checking for updates...
I0127 11:42:03.103248 397538 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0127 11:42:03.104291 397538 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20319-348858/kubeconfig
I0127 11:42:03.105193 397538 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-348858/.minikube
I0127 11:42:03.106107 397538 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0127 11:42:03.107049 397538 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0127 11:42:03.108359 397538 config.go:182] Loaded profile config "no-preload-976043": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 11:42:03.108703 397538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 11:42:03.108755 397538 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 11:42:03.124139 397538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41393
I0127 11:42:03.124588 397538 main.go:141] libmachine: () Calling .GetVersion
I0127 11:42:03.125155 397538 main.go:141] libmachine: Using API Version 1
I0127 11:42:03.125177 397538 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 11:42:03.125481 397538 main.go:141] libmachine: () Calling .GetMachineName
I0127 11:42:03.125688 397538 main.go:141] libmachine: (no-preload-976043) Calling .DriverName
I0127 11:42:03.125890 397538 driver.go:394] Setting default libvirt URI to qemu:///system
I0127 11:42:03.126145 397538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 11:42:03.126181 397538 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 11:42:03.140430 397538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35629
I0127 11:42:03.140751 397538 main.go:141] libmachine: () Calling .GetVersion
I0127 11:42:03.141193 397538 main.go:141] libmachine: Using API Version 1
I0127 11:42:03.141215 397538 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 11:42:03.141543 397538 main.go:141] libmachine: () Calling .GetMachineName
I0127 11:42:03.141731 397538 main.go:141] libmachine: (no-preload-976043) Calling .DriverName
I0127 11:42:03.174305 397538 out.go:177] * Using the kvm2 driver based on existing profile
I0127 11:42:03.175428 397538 start.go:297] selected driver: kvm2
I0127 11:42:03.175443 397538 start.go:901] validating driver "kvm2" against &{Name:no-preload-976043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-976043 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.171 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0
CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0127 11:42:03.175564 397538 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0127 11:42:03.176243 397538 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 11:42:03.176336 397538 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20319-348858/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0127 11:42:03.190164 397538 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
I0127 11:42:03.190564 397538 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0127 11:42:03.190600 397538 cni.go:84] Creating CNI manager for ""
I0127 11:42:03.190655 397538 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0127 11:42:03.190698 397538 start.go:340] cluster config:
{Name:no-preload-976043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-976043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIS
erverIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.171 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-hos
t Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0127 11:42:03.190821 397538 iso.go:125] acquiring lock: {Name:mk6cdd2a3d0bfb3682c1f0c806368944f23c4809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 11:42:03.192311 397538 out.go:177] * Starting "no-preload-976043" primary control-plane node in "no-preload-976043" cluster
I0127 11:42:03.193390 397538 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
I0127 11:42:03.193514 397538 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/no-preload-976043/config.json ...
I0127 11:42:03.193660 397538 cache.go:107] acquiring lock: {Name:mkb3b538314fd62eab2309dcd5112da57bc5e70f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 11:42:03.193683 397538 cache.go:107] acquiring lock: {Name:mk15fb5de5283e9b279b6db3ee8dc9560c2058d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 11:42:03.193688 397538 cache.go:107] acquiring lock: {Name:mkb29ea1858769de0fd0373c310163fc2fa627dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 11:42:03.193754 397538 cache.go:115] /home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 exists
I0127 11:42:03.193764 397538 cache.go:115] /home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
I0127 11:42:03.193775 397538 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 94.986µs
I0127 11:42:03.193768 397538 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.32.1" -> "/home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1" took 116.148µs
I0127 11:42:03.193777 397538 start.go:360] acquireMachinesLock for no-preload-976043: {Name:mk69dba1a41baeb0794a28159a5cef220370e224 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0127 11:42:03.193792 397538 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
I0127 11:42:03.193767 397538 cache.go:107] acquiring lock: {Name:mk5cca8e3a1343f5fa2a41e9d49b890938823fec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 11:42:03.193814 397538 cache.go:115] /home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 exists
I0127 11:42:03.193810 397538 cache.go:107] acquiring lock: {Name:mk40f0ef462377ecb38e4605d0b4126cd486f9ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 11:42:03.193821 397538 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.32.1" -> "/home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1" took 151.448µs
I0127 11:42:03.193830 397538 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.32.1 -> /home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 succeeded
I0127 11:42:03.193806 397538 cache.go:107] acquiring lock: {Name:mkcde954d35adaaae82458dd5942fd51fc6d4bb7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 11:42:03.193895 397538 cache.go:115] /home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 exists
I0127 11:42:03.193795 397538 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.32.1 -> /home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 succeeded
I0127 11:42:03.193775 397538 cache.go:107] acquiring lock: {Name:mkd4e82fceee3273a1d5d1b137294af730b923cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 11:42:03.193915 397538 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.32.1" -> "/home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1" took 144.555µs
I0127 11:42:03.193932 397538 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.32.1 -> /home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 succeeded
I0127 11:42:03.193938 397538 cache.go:115] /home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
I0127 11:42:03.193905 397538 cache.go:115] /home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
I0127 11:42:03.193948 397538 cache.go:115] /home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 exists
I0127 11:42:03.193948 397538 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 140.139µs
I0127 11:42:03.193954 397538 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 226.702µs
I0127 11:42:03.193959 397538 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
I0127 11:42:03.193962 397538 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
I0127 11:42:03.193960 397538 cache.go:96] cache image "registry.k8s.io/etcd:3.5.16-0" -> "/home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0" took 187.88µs
I0127 11:42:03.193970 397538 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.16-0 -> /home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 succeeded
I0127 11:42:03.193964 397538 cache.go:107] acquiring lock: {Name:mk24732c35ec239b8e7de95e39891c358710fa1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 11:42:03.194057 397538 cache.go:115] /home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 exists
I0127 11:42:03.194071 397538 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.32.1" -> "/home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1" took 156.361µs
I0127 11:42:03.194080 397538 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.32.1 -> /home/jenkins/minikube-integration/20319-348858/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 succeeded
I0127 11:42:03.194086 397538 cache.go:87] Successfully saved all images to host disk.
I0127 11:42:13.374117 397538 start.go:364] duration metric: took 10.180279664s to acquireMachinesLock for "no-preload-976043"
I0127 11:42:13.374219 397538 start.go:96] Skipping create...Using existing machine configuration
I0127 11:42:13.374233 397538 fix.go:54] fixHost starting:
I0127 11:42:13.374751 397538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 11:42:13.374820 397538 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 11:42:13.391642 397538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37031
I0127 11:42:13.392129 397538 main.go:141] libmachine: () Calling .GetVersion
I0127 11:42:13.392697 397538 main.go:141] libmachine: Using API Version 1
I0127 11:42:13.392719 397538 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 11:42:13.393131 397538 main.go:141] libmachine: () Calling .GetMachineName
I0127 11:42:13.393340 397538 main.go:141] libmachine: (no-preload-976043) Calling .DriverName
I0127 11:42:13.393471 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetState
I0127 11:42:13.395004 397538 fix.go:112] recreateIfNeeded on no-preload-976043: state=Stopped err=<nil>
I0127 11:42:13.395037 397538 main.go:141] libmachine: (no-preload-976043) Calling .DriverName
W0127 11:42:13.395190 397538 fix.go:138] unexpected machine state, will restart: <nil>
I0127 11:42:13.397334 397538 out.go:177] * Restarting existing kvm2 VM for "no-preload-976043" ...
I0127 11:42:13.398351 397538 main.go:141] libmachine: (no-preload-976043) Calling .Start
I0127 11:42:13.398480 397538 main.go:141] libmachine: (no-preload-976043) starting domain...
I0127 11:42:13.398507 397538 main.go:141] libmachine: (no-preload-976043) ensuring networks are active...
I0127 11:42:13.399264 397538 main.go:141] libmachine: (no-preload-976043) Ensuring network default is active
I0127 11:42:13.399609 397538 main.go:141] libmachine: (no-preload-976043) Ensuring network mk-no-preload-976043 is active
I0127 11:42:13.399975 397538 main.go:141] libmachine: (no-preload-976043) getting domain XML...
I0127 11:42:13.400687 397538 main.go:141] libmachine: (no-preload-976043) creating domain...
I0127 11:42:13.745414 397538 main.go:141] libmachine: (no-preload-976043) waiting for IP...
I0127 11:42:13.746381 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:42:13.746824 397538 main.go:141] libmachine: (no-preload-976043) DBG | unable to find current IP address of domain no-preload-976043 in network mk-no-preload-976043
I0127 11:42:13.746909 397538 main.go:141] libmachine: (no-preload-976043) DBG | I0127 11:42:13.746812 397639 retry.go:31] will retry after 204.398172ms: waiting for domain to come up
I0127 11:42:13.953424 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:42:13.954027 397538 main.go:141] libmachine: (no-preload-976043) DBG | unable to find current IP address of domain no-preload-976043 in network mk-no-preload-976043
I0127 11:42:13.954091 397538 main.go:141] libmachine: (no-preload-976043) DBG | I0127 11:42:13.953996 397639 retry.go:31] will retry after 235.784526ms: waiting for domain to come up
I0127 11:42:14.191602 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:42:14.192187 397538 main.go:141] libmachine: (no-preload-976043) DBG | unable to find current IP address of domain no-preload-976043 in network mk-no-preload-976043
I0127 11:42:14.192227 397538 main.go:141] libmachine: (no-preload-976043) DBG | I0127 11:42:14.192155 397639 retry.go:31] will retry after 427.633149ms: waiting for domain to come up
I0127 11:42:14.621752 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:42:14.622243 397538 main.go:141] libmachine: (no-preload-976043) DBG | unable to find current IP address of domain no-preload-976043 in network mk-no-preload-976043
I0127 11:42:14.622296 397538 main.go:141] libmachine: (no-preload-976043) DBG | I0127 11:42:14.622209 397639 retry.go:31] will retry after 570.191522ms: waiting for domain to come up
I0127 11:42:15.193966 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:42:15.194462 397538 main.go:141] libmachine: (no-preload-976043) DBG | unable to find current IP address of domain no-preload-976043 in network mk-no-preload-976043
I0127 11:42:15.194494 397538 main.go:141] libmachine: (no-preload-976043) DBG | I0127 11:42:15.194431 397639 retry.go:31] will retry after 543.673911ms: waiting for domain to come up
I0127 11:42:15.739921 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:42:15.740528 397538 main.go:141] libmachine: (no-preload-976043) DBG | unable to find current IP address of domain no-preload-976043 in network mk-no-preload-976043
I0127 11:42:15.740569 397538 main.go:141] libmachine: (no-preload-976043) DBG | I0127 11:42:15.740451 397639 retry.go:31] will retry after 783.899267ms: waiting for domain to come up
I0127 11:42:16.526619 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:42:16.527159 397538 main.go:141] libmachine: (no-preload-976043) DBG | unable to find current IP address of domain no-preload-976043 in network mk-no-preload-976043
I0127 11:42:16.527192 397538 main.go:141] libmachine: (no-preload-976043) DBG | I0127 11:42:16.527133 397639 retry.go:31] will retry after 965.500175ms: waiting for domain to come up
I0127 11:42:17.494011 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:42:17.494568 397538 main.go:141] libmachine: (no-preload-976043) DBG | unable to find current IP address of domain no-preload-976043 in network mk-no-preload-976043
I0127 11:42:17.494600 397538 main.go:141] libmachine: (no-preload-976043) DBG | I0127 11:42:17.494542 397639 retry.go:31] will retry after 958.680685ms: waiting for domain to come up
I0127 11:42:18.454599 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:42:18.455062 397538 main.go:141] libmachine: (no-preload-976043) DBG | unable to find current IP address of domain no-preload-976043 in network mk-no-preload-976043
I0127 11:42:18.455095 397538 main.go:141] libmachine: (no-preload-976043) DBG | I0127 11:42:18.455018 397639 retry.go:31] will retry after 1.186565059s: waiting for domain to come up
I0127 11:42:19.643447 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:42:19.644022 397538 main.go:141] libmachine: (no-preload-976043) DBG | unable to find current IP address of domain no-preload-976043 in network mk-no-preload-976043
I0127 11:42:19.644056 397538 main.go:141] libmachine: (no-preload-976043) DBG | I0127 11:42:19.643978 397639 retry.go:31] will retry after 2.293858726s: waiting for domain to come up
I0127 11:42:21.940384 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:42:21.940868 397538 main.go:141] libmachine: (no-preload-976043) DBG | unable to find current IP address of domain no-preload-976043 in network mk-no-preload-976043
I0127 11:42:21.940893 397538 main.go:141] libmachine: (no-preload-976043) DBG | I0127 11:42:21.940830 397639 retry.go:31] will retry after 2.796298468s: waiting for domain to come up
I0127 11:42:24.738798 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:42:24.739380 397538 main.go:141] libmachine: (no-preload-976043) DBG | unable to find current IP address of domain no-preload-976043 in network mk-no-preload-976043
I0127 11:42:24.739407 397538 main.go:141] libmachine: (no-preload-976043) DBG | I0127 11:42:24.739332 397639 retry.go:31] will retry after 2.553260317s: waiting for domain to come up
I0127 11:42:27.295899 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:42:27.296395 397538 main.go:141] libmachine: (no-preload-976043) DBG | unable to find current IP address of domain no-preload-976043 in network mk-no-preload-976043
I0127 11:42:27.296425 397538 main.go:141] libmachine: (no-preload-976043) DBG | I0127 11:42:27.296366 397639 retry.go:31] will retry after 3.879381748s: waiting for domain to come up
I0127 11:42:31.179806 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:42:31.180316 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has current primary IP address 192.168.72.171 and MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:42:31.180346 397538 main.go:141] libmachine: (no-preload-976043) found domain IP: 192.168.72.171
I0127 11:42:31.180359 397538 main.go:141] libmachine: (no-preload-976043) reserving static IP address...
I0127 11:42:31.180964 397538 main.go:141] libmachine: (no-preload-976043) DBG | found host DHCP lease matching {name: "no-preload-976043", mac: "52:54:00:f9:a3:49", ip: "192.168.72.171"} in network mk-no-preload-976043: {Iface:virbr4 ExpiryTime:2025-01-27 12:38:59 +0000 UTC Type:0 Mac:52:54:00:f9:a3:49 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:no-preload-976043 Clientid:01:52:54:00:f9:a3:49}
I0127 11:42:31.180995 397538 main.go:141] libmachine: (no-preload-976043) DBG | skip adding static IP to network mk-no-preload-976043 - found existing host DHCP lease matching {name: "no-preload-976043", mac: "52:54:00:f9:a3:49", ip: "192.168.72.171"}
I0127 11:42:31.181015 397538 main.go:141] libmachine: (no-preload-976043) DBG | Getting to WaitForSSH function...
I0127 11:42:31.181029 397538 main.go:141] libmachine: (no-preload-976043) reserved static IP address 192.168.72.171 for domain no-preload-976043
I0127 11:42:31.181041 397538 main.go:141] libmachine: (no-preload-976043) waiting for SSH...
I0127 11:42:31.183228 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:42:31.183587 397538 main.go:141] libmachine: (no-preload-976043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:a3:49", ip: ""} in network mk-no-preload-976043: {Iface:virbr4 ExpiryTime:2025-01-27 12:38:59 +0000 UTC Type:0 Mac:52:54:00:f9:a3:49 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:no-preload-976043 Clientid:01:52:54:00:f9:a3:49}
I0127 11:42:31.183622 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined IP address 192.168.72.171 and MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:42:31.183715 397538 main.go:141] libmachine: (no-preload-976043) DBG | Using SSH client type: external
I0127 11:42:31.183751 397538 main.go:141] libmachine: (no-preload-976043) DBG | Using SSH private key: /home/jenkins/minikube-integration/20319-348858/.minikube/machines/no-preload-976043/id_rsa (-rw-------)
I0127 11:42:31.183792 397538 main.go:141] libmachine: (no-preload-976043) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.171 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20319-348858/.minikube/machines/no-preload-976043/id_rsa -p 22] /usr/bin/ssh <nil>}
I0127 11:42:31.183814 397538 main.go:141] libmachine: (no-preload-976043) DBG | About to run SSH command:
I0127 11:42:31.183827 397538 main.go:141] libmachine: (no-preload-976043) DBG | exit 0
I0127 11:42:31.318714 397538 main.go:141] libmachine: (no-preload-976043) DBG | SSH cmd err, output: <nil>:
I0127 11:42:31.319136 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetConfigRaw
I0127 11:42:31.319870 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetIP
I0127 11:42:31.322940 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:42:31.323480 397538 main.go:141] libmachine: (no-preload-976043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:a3:49", ip: ""} in network mk-no-preload-976043: {Iface:virbr4 ExpiryTime:2025-01-27 12:38:59 +0000 UTC Type:0 Mac:52:54:00:f9:a3:49 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:no-preload-976043 Clientid:01:52:54:00:f9:a3:49}
I0127 11:42:31.323521 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined IP address 192.168.72.171 and MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:42:31.323843 397538 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/no-preload-976043/config.json ...
I0127 11:42:31.324072 397538 machine.go:93] provisionDockerMachine start ...
I0127 11:42:31.324100 397538 main.go:141] libmachine: (no-preload-976043) Calling .DriverName
I0127 11:42:31.324326 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHHostname
I0127 11:42:31.326993 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:42:31.327388 397538 main.go:141] libmachine: (no-preload-976043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:a3:49", ip: ""} in network mk-no-preload-976043: {Iface:virbr4 ExpiryTime:2025-01-27 12:38:59 +0000 UTC Type:0 Mac:52:54:00:f9:a3:49 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:no-preload-976043 Clientid:01:52:54:00:f9:a3:49}
I0127 11:42:31.327430 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined IP address 192.168.72.171 and MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:42:31.327562 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHPort
I0127 11:42:31.327756 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHKeyPath
I0127 11:42:31.327911 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHKeyPath
I0127 11:42:31.328079 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHUsername
I0127 11:42:31.328250 397538 main.go:141] libmachine: Using SSH client type: native
I0127 11:42:31.328499 397538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.72.171 22 <nil> <nil>}
I0127 11:42:31.328514 397538 main.go:141] libmachine: About to run SSH command:
hostname
I0127 11:42:31.441955 397538 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0127 11:42:31.441997 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetMachineName
I0127 11:42:31.442241 397538 buildroot.go:166] provisioning hostname "no-preload-976043"
I0127 11:42:31.442273 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetMachineName
I0127 11:42:31.442470 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHHostname
I0127 11:42:31.445399 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:42:31.445826 397538 main.go:141] libmachine: (no-preload-976043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:a3:49", ip: ""} in network mk-no-preload-976043: {Iface:virbr4 ExpiryTime:2025-01-27 12:38:59 +0000 UTC Type:0 Mac:52:54:00:f9:a3:49 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:no-preload-976043 Clientid:01:52:54:00:f9:a3:49}
I0127 11:42:31.445876 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined IP address 192.168.72.171 and MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:42:31.446028 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHPort
I0127 11:42:31.446221 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHKeyPath
I0127 11:42:31.446409 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHKeyPath
I0127 11:42:31.446567 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHUsername
I0127 11:42:31.446771 397538 main.go:141] libmachine: Using SSH client type: native
I0127 11:42:31.447006 397538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.72.171 22 <nil> <nil>}
I0127 11:42:31.447035 397538 main.go:141] libmachine: About to run SSH command:
sudo hostname no-preload-976043 && echo "no-preload-976043" | sudo tee /etc/hostname
I0127 11:42:31.580453 397538 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-976043
I0127 11:42:31.580482 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHHostname
I0127 11:42:31.583587 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:42:31.584015 397538 main.go:141] libmachine: (no-preload-976043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:a3:49", ip: ""} in network mk-no-preload-976043: {Iface:virbr4 ExpiryTime:2025-01-27 12:38:59 +0000 UTC Type:0 Mac:52:54:00:f9:a3:49 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:no-preload-976043 Clientid:01:52:54:00:f9:a3:49}
I0127 11:42:31.584048 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined IP address 192.168.72.171 and MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:42:31.584245 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHPort
I0127 11:42:31.584455 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHKeyPath
I0127 11:42:31.584634 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHKeyPath
I0127 11:42:31.584792 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHUsername
I0127 11:42:31.584993 397538 main.go:141] libmachine: Using SSH client type: native
I0127 11:42:31.585198 397538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.72.171 22 <nil> <nil>}
I0127 11:42:31.585214 397538 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sno-preload-976043' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-976043/g' /etc/hosts;
else
echo '127.0.1.1 no-preload-976043' | sudo tee -a /etc/hosts;
fi
fi
I0127 11:42:31.716612 397538 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0127 11:42:31.716642 397538 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20319-348858/.minikube CaCertPath:/home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20319-348858/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20319-348858/.minikube}
I0127 11:42:31.716670 397538 buildroot.go:174] setting up certificates
I0127 11:42:31.716692 397538 provision.go:84] configureAuth start
I0127 11:42:31.716705 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetMachineName
I0127 11:42:31.716947 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetIP
I0127 11:42:31.719415 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:42:31.719764 397538 main.go:141] libmachine: (no-preload-976043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:a3:49", ip: ""} in network mk-no-preload-976043: {Iface:virbr4 ExpiryTime:2025-01-27 12:38:59 +0000 UTC Type:0 Mac:52:54:00:f9:a3:49 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:no-preload-976043 Clientid:01:52:54:00:f9:a3:49}
I0127 11:42:31.719792 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined IP address 192.168.72.171 and MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:42:31.719947 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHHostname
I0127 11:42:31.722391 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:42:31.722749 397538 main.go:141] libmachine: (no-preload-976043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:a3:49", ip: ""} in network mk-no-preload-976043: {Iface:virbr4 ExpiryTime:2025-01-27 12:38:59 +0000 UTC Type:0 Mac:52:54:00:f9:a3:49 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:no-preload-976043 Clientid:01:52:54:00:f9:a3:49}
I0127 11:42:31.722785 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined IP address 192.168.72.171 and MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:42:31.722889 397538 provision.go:143] copyHostCerts
I0127 11:42:31.722959 397538 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-348858/.minikube/ca.pem, removing ...
I0127 11:42:31.722983 397538 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-348858/.minikube/ca.pem
I0127 11:42:31.723052 397538 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20319-348858/.minikube/ca.pem (1082 bytes)
I0127 11:42:31.723254 397538 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-348858/.minikube/cert.pem, removing ...
I0127 11:42:31.723269 397538 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-348858/.minikube/cert.pem
I0127 11:42:31.723310 397538 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20319-348858/.minikube/cert.pem (1123 bytes)
I0127 11:42:31.723433 397538 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-348858/.minikube/key.pem, removing ...
I0127 11:42:31.723445 397538 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-348858/.minikube/key.pem
I0127 11:42:31.723472 397538 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20319-348858/.minikube/key.pem (1679 bytes)
I0127 11:42:31.723554 397538 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20319-348858/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca-key.pem org=jenkins.no-preload-976043 san=[127.0.0.1 192.168.72.171 localhost minikube no-preload-976043]
I0127 11:42:31.833389 397538 provision.go:177] copyRemoteCerts
I0127 11:42:31.833431 397538 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0127 11:42:31.833447 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHHostname
I0127 11:42:31.835718 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:42:31.836017 397538 main.go:141] libmachine: (no-preload-976043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:a3:49", ip: ""} in network mk-no-preload-976043: {Iface:virbr4 ExpiryTime:2025-01-27 12:38:59 +0000 UTC Type:0 Mac:52:54:00:f9:a3:49 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:no-preload-976043 Clientid:01:52:54:00:f9:a3:49}
I0127 11:42:31.836052 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined IP address 192.168.72.171 and MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:42:31.836187 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHPort
I0127 11:42:31.836346 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHKeyPath
I0127 11:42:31.836440 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHUsername
I0127 11:42:31.836572 397538 sshutil.go:53] new ssh client: &{IP:192.168.72.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/no-preload-976043/id_rsa Username:docker}
I0127 11:42:31.921804 397538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0127 11:42:31.951153 397538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I0127 11:42:31.975304 397538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0127 11:42:32.001708 397538 provision.go:87] duration metric: took 285.00385ms to configureAuth
I0127 11:42:32.001735 397538 buildroot.go:189] setting minikube options for container-runtime
I0127 11:42:32.001975 397538 config.go:182] Loaded profile config "no-preload-976043": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 11:42:32.001991 397538 machine.go:96] duration metric: took 677.901023ms to provisionDockerMachine
I0127 11:42:32.002002 397538 start.go:293] postStartSetup for "no-preload-976043" (driver="kvm2")
I0127 11:42:32.002016 397538 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0127 11:42:32.002050 397538 main.go:141] libmachine: (no-preload-976043) Calling .DriverName
I0127 11:42:32.002346 397538 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0127 11:42:32.002381 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHHostname
I0127 11:42:32.004762 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:42:32.005177 397538 main.go:141] libmachine: (no-preload-976043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:a3:49", ip: ""} in network mk-no-preload-976043: {Iface:virbr4 ExpiryTime:2025-01-27 12:38:59 +0000 UTC Type:0 Mac:52:54:00:f9:a3:49 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:no-preload-976043 Clientid:01:52:54:00:f9:a3:49}
I0127 11:42:32.005204 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined IP address 192.168.72.171 and MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:42:32.005363 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHPort
I0127 11:42:32.005527 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHKeyPath
I0127 11:42:32.005695 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHUsername
I0127 11:42:32.005837 397538 sshutil.go:53] new ssh client: &{IP:192.168.72.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/no-preload-976043/id_rsa Username:docker}
I0127 11:42:32.091456 397538 ssh_runner.go:195] Run: cat /etc/os-release
I0127 11:42:32.095413 397538 info.go:137] Remote host: Buildroot 2023.02.9
I0127 11:42:32.095437 397538 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-348858/.minikube/addons for local assets ...
I0127 11:42:32.095495 397538 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-348858/.minikube/files for local assets ...
I0127 11:42:32.095611 397538 filesync.go:149] local asset: /home/jenkins/minikube-integration/20319-348858/.minikube/files/etc/ssl/certs/3562042.pem -> 3562042.pem in /etc/ssl/certs
I0127 11:42:32.095716 397538 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0127 11:42:32.104408 397538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/files/etc/ssl/certs/3562042.pem --> /etc/ssl/certs/3562042.pem (1708 bytes)
I0127 11:42:32.132033 397538 start.go:296] duration metric: took 130.01876ms for postStartSetup
I0127 11:42:32.132073 397538 fix.go:56] duration metric: took 18.757840228s for fixHost
I0127 11:42:32.132095 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHHostname
I0127 11:42:32.134785 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:42:32.135163 397538 main.go:141] libmachine: (no-preload-976043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:a3:49", ip: ""} in network mk-no-preload-976043: {Iface:virbr4 ExpiryTime:2025-01-27 12:38:59 +0000 UTC Type:0 Mac:52:54:00:f9:a3:49 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:no-preload-976043 Clientid:01:52:54:00:f9:a3:49}
I0127 11:42:32.135207 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined IP address 192.168.72.171 and MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:42:32.135362 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHPort
I0127 11:42:32.135547 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHKeyPath
I0127 11:42:32.135716 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHKeyPath
I0127 11:42:32.135842 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHUsername
I0127 11:42:32.136011 397538 main.go:141] libmachine: Using SSH client type: native
I0127 11:42:32.136169 397538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.72.171 22 <nil> <nil>}
I0127 11:42:32.136179 397538 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0127 11:42:32.254483 397538 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737978152.223308330
I0127 11:42:32.254513 397538 fix.go:216] guest clock: 1737978152.223308330
I0127 11:42:32.254523 397538 fix.go:229] Guest: 2025-01-27 11:42:32.22330833 +0000 UTC Remote: 2025-01-27 11:42:32.132078506 +0000 UTC m=+29.072718026 (delta=91.229824ms)
I0127 11:42:32.254550 397538 fix.go:200] guest clock delta is within tolerance: 91.229824ms
I0127 11:42:32.254569 397538 start.go:83] releasing machines lock for "no-preload-976043", held for 18.8803625s
I0127 11:42:32.254605 397538 main.go:141] libmachine: (no-preload-976043) Calling .DriverName
I0127 11:42:32.254908 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetIP
I0127 11:42:32.257822 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:42:32.258236 397538 main.go:141] libmachine: (no-preload-976043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:a3:49", ip: ""} in network mk-no-preload-976043: {Iface:virbr4 ExpiryTime:2025-01-27 12:38:59 +0000 UTC Type:0 Mac:52:54:00:f9:a3:49 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:no-preload-976043 Clientid:01:52:54:00:f9:a3:49}
I0127 11:42:32.258285 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined IP address 192.168.72.171 and MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:42:32.258394 397538 main.go:141] libmachine: (no-preload-976043) Calling .DriverName
I0127 11:42:32.258871 397538 main.go:141] libmachine: (no-preload-976043) Calling .DriverName
I0127 11:42:32.259051 397538 main.go:141] libmachine: (no-preload-976043) Calling .DriverName
I0127 11:42:32.259220 397538 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0127 11:42:32.259249 397538 ssh_runner.go:195] Run: cat /version.json
I0127 11:42:32.259275 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHHostname
I0127 11:42:32.259288 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHHostname
I0127 11:42:32.262161 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:42:32.262395 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:42:32.262559 397538 main.go:141] libmachine: (no-preload-976043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:a3:49", ip: ""} in network mk-no-preload-976043: {Iface:virbr4 ExpiryTime:2025-01-27 12:38:59 +0000 UTC Type:0 Mac:52:54:00:f9:a3:49 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:no-preload-976043 Clientid:01:52:54:00:f9:a3:49}
I0127 11:42:32.262581 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined IP address 192.168.72.171 and MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:42:32.262752 397538 main.go:141] libmachine: (no-preload-976043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:a3:49", ip: ""} in network mk-no-preload-976043: {Iface:virbr4 ExpiryTime:2025-01-27 12:38:59 +0000 UTC Type:0 Mac:52:54:00:f9:a3:49 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:no-preload-976043 Clientid:01:52:54:00:f9:a3:49}
I0127 11:42:32.262779 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined IP address 192.168.72.171 and MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:42:32.262815 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHPort
I0127 11:42:32.262996 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHPort
I0127 11:42:32.263004 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHKeyPath
I0127 11:42:32.263132 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHKeyPath
I0127 11:42:32.263184 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHUsername
I0127 11:42:32.263268 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHUsername
I0127 11:42:32.263389 397538 sshutil.go:53] new ssh client: &{IP:192.168.72.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/no-preload-976043/id_rsa Username:docker}
I0127 11:42:32.263411 397538 sshutil.go:53] new ssh client: &{IP:192.168.72.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/no-preload-976043/id_rsa Username:docker}
I0127 11:42:32.352620 397538 ssh_runner.go:195] Run: systemctl --version
I0127 11:42:32.377317 397538 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0127 11:42:32.385395 397538 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0127 11:42:32.385502 397538 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0127 11:42:32.407057 397538 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0127 11:42:32.407117 397538 start.go:495] detecting cgroup driver to use...
I0127 11:42:32.407191 397538 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0127 11:42:32.446250 397538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0127 11:42:32.463378 397538 docker.go:217] disabling cri-docker service (if available) ...
I0127 11:42:32.463426 397538 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0127 11:42:32.483338 397538 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0127 11:42:32.500144 397538 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0127 11:42:32.627382 397538 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0127 11:42:32.776344 397538 docker.go:233] disabling docker service ...
I0127 11:42:32.776436 397538 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0127 11:42:32.794188 397538 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0127 11:42:32.805919 397538 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0127 11:42:32.949317 397538 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0127 11:42:33.103404 397538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0127 11:42:33.117381 397538 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0127 11:42:33.136381 397538 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0127 11:42:33.148097 397538 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0127 11:42:33.158937 397538 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0127 11:42:33.159019 397538 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0127 11:42:33.170771 397538 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0127 11:42:33.182634 397538 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0127 11:42:33.193218 397538 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0127 11:42:33.204370 397538 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0127 11:42:33.216100 397538 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0127 11:42:33.227506 397538 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0127 11:42:33.241630 397538 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0127 11:42:33.256006 397538 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0127 11:42:33.266448 397538 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0127 11:42:33.266499 397538 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0127 11:42:33.281767 397538 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0127 11:42:33.294330 397538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 11:42:33.435848 397538 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0127 11:42:33.472738 397538 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0127 11:42:33.472814 397538 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0127 11:42:33.480275 397538 retry.go:31] will retry after 867.114584ms: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
I0127 11:42:34.347647 397538 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0127 11:42:34.354396 397538 start.go:563] Will wait 60s for crictl version
I0127 11:42:34.354470 397538 ssh_runner.go:195] Run: which crictl
I0127 11:42:34.359304 397538 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0127 11:42:34.409380 397538 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.7.23
RuntimeApiVersion: v1
I0127 11:42:34.409467 397538 ssh_runner.go:195] Run: containerd --version
I0127 11:42:34.446918 397538 ssh_runner.go:195] Run: containerd --version
I0127 11:42:34.479052 397538 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
I0127 11:42:34.480411 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetIP
I0127 11:42:34.483298 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:42:34.483754 397538 main.go:141] libmachine: (no-preload-976043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:a3:49", ip: ""} in network mk-no-preload-976043: {Iface:virbr4 ExpiryTime:2025-01-27 12:38:59 +0000 UTC Type:0 Mac:52:54:00:f9:a3:49 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:no-preload-976043 Clientid:01:52:54:00:f9:a3:49}
I0127 11:42:34.483792 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined IP address 192.168.72.171 and MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:42:34.484023 397538 ssh_runner.go:195] Run: grep 192.168.72.1 host.minikube.internal$ /etc/hosts
I0127 11:42:34.489211 397538 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0127 11:42:34.507169 397538 kubeadm.go:883] updating cluster {Name:no-preload-976043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-976043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.171 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0127 11:42:34.507326 397538 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
I0127 11:42:34.507375 397538 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 11:42:34.545992 397538 containerd.go:627] all images are preloaded for containerd runtime.
I0127 11:42:34.546022 397538 cache_images.go:84] Images are preloaded, skipping loading
I0127 11:42:34.546033 397538 kubeadm.go:934] updating node { 192.168.72.171 8443 v1.32.1 containerd true true} ...
I0127 11:42:34.546165 397538 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-976043 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.171
[Install]
config:
{KubernetesVersion:v1.32.1 ClusterName:no-preload-976043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0127 11:42:34.546245 397538 ssh_runner.go:195] Run: sudo crictl info
I0127 11:42:34.584023 397538 cni.go:84] Creating CNI manager for ""
I0127 11:42:34.584050 397538 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0127 11:42:34.584063 397538 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0127 11:42:34.584095 397538 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.171 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-976043 NodeName:no-preload-976043 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.171"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.171 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0127 11:42:34.584295 397538 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.72.171
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
name: "no-preload-976043"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.72.171"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.72.171"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
- name: "proxy-refresh-interval"
value: "70000"
kubernetesVersion: v1.32.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0127 11:42:34.584375 397538 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
I0127 11:42:34.596555 397538 binaries.go:44] Found k8s binaries, skipping transfer
I0127 11:42:34.596623 397538 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0127 11:42:34.609790 397538 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
I0127 11:42:34.630604 397538 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0127 11:42:34.647666 397538 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2313 bytes)
I0127 11:42:34.667837 397538 ssh_runner.go:195] Run: grep 192.168.72.171 control-plane.minikube.internal$ /etc/hosts
I0127 11:42:34.671757 397538 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.171 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0127 11:42:34.688584 397538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 11:42:34.820236 397538 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0127 11:42:34.843186 397538 certs.go:68] Setting up /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/no-preload-976043 for IP: 192.168.72.171
I0127 11:42:34.843216 397538 certs.go:194] generating shared ca certs ...
I0127 11:42:34.843239 397538 certs.go:226] acquiring lock for ca certs: {Name:mkd458666dacb6826c0d92f860c3c2133957f34f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 11:42:34.843444 397538 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20319-348858/.minikube/ca.key
I0127 11:42:34.843494 397538 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20319-348858/.minikube/proxy-client-ca.key
I0127 11:42:34.843503 397538 certs.go:256] generating profile certs ...
I0127 11:42:34.843580 397538 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/no-preload-976043/client.key
I0127 11:42:34.843655 397538 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/no-preload-976043/apiserver.key.6127f777
I0127 11:42:34.843711 397538 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/no-preload-976043/proxy-client.key
I0127 11:42:34.843854 397538 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/356204.pem (1338 bytes)
W0127 11:42:34.843887 397538 certs.go:480] ignoring /home/jenkins/minikube-integration/20319-348858/.minikube/certs/356204_empty.pem, impossibly tiny 0 bytes
I0127 11:42:34.843909 397538 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca-key.pem (1675 bytes)
I0127 11:42:34.843952 397538 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca.pem (1082 bytes)
I0127 11:42:34.843978 397538 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/cert.pem (1123 bytes)
I0127 11:42:34.843999 397538 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/key.pem (1679 bytes)
I0127 11:42:34.844039 397538 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-348858/.minikube/files/etc/ssl/certs/3562042.pem (1708 bytes)
I0127 11:42:34.844726 397538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0127 11:42:34.897545 397538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0127 11:42:34.930839 397538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0127 11:42:34.965272 397538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0127 11:42:34.993738 397538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/no-preload-976043/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I0127 11:42:35.022783 397538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/no-preload-976043/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0127 11:42:35.049813 397538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/no-preload-976043/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0127 11:42:35.082422 397538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/no-preload-976043/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0127 11:42:35.111230 397538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0127 11:42:35.140492 397538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/certs/356204.pem --> /usr/share/ca-certificates/356204.pem (1338 bytes)
I0127 11:42:35.169716 397538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/files/etc/ssl/certs/3562042.pem --> /usr/share/ca-certificates/3562042.pem (1708 bytes)
I0127 11:42:35.193880 397538 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0127 11:42:35.213782 397538 ssh_runner.go:195] Run: openssl version
I0127 11:42:35.220718 397538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0127 11:42:35.232357 397538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0127 11:42:35.238360 397538 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 10:32 /usr/share/ca-certificates/minikubeCA.pem
I0127 11:42:35.238422 397538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0127 11:42:35.246146 397538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0127 11:42:35.260062 397538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/356204.pem && ln -fs /usr/share/ca-certificates/356204.pem /etc/ssl/certs/356204.pem"
I0127 11:42:35.271431 397538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356204.pem
I0127 11:42:35.275997 397538 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 10:40 /usr/share/ca-certificates/356204.pem
I0127 11:42:35.276061 397538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356204.pem
I0127 11:42:35.282125 397538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/356204.pem /etc/ssl/certs/51391683.0"
I0127 11:42:35.295982 397538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3562042.pem && ln -fs /usr/share/ca-certificates/3562042.pem /etc/ssl/certs/3562042.pem"
I0127 11:42:35.309951 397538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3562042.pem
I0127 11:42:35.314700 397538 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 10:40 /usr/share/ca-certificates/3562042.pem
I0127 11:42:35.314777 397538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3562042.pem
I0127 11:42:35.320540 397538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3562042.pem /etc/ssl/certs/3ec20f2e.0"
I0127 11:42:35.334666 397538 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0127 11:42:35.340491 397538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0127 11:42:35.346356 397538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0127 11:42:35.353945 397538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0127 11:42:35.361660 397538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0127 11:42:35.368995 397538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0127 11:42:35.376407 397538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
I0127 11:42:35.383741 397538 kubeadm.go:392] StartCluster: {Name:no-preload-976043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-976043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.171 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0127 11:42:35.383847 397538 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0127 11:42:35.383915 397538 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0127 11:42:35.436368 397538 cri.go:89] found id: "592526ef804938a6c1c336b289cd8827d738d6311e20cc1c6faea6e7a38ddafb"
I0127 11:42:35.436386 397538 cri.go:89] found id: "092bc5ec03dc81a509614dd4608faacb928e67005952659d9331f62a97f079d9"
I0127 11:42:35.436391 397538 cri.go:89] found id: "f845360ca5e3d739fde48598fe03a808590cbf150c4bf3148b318621f8d63d81"
I0127 11:42:35.436396 397538 cri.go:89] found id: "cedb41d0de988e1cddd2b9e34ef09066434b9415da107f3ec047f2981ee476ca"
I0127 11:42:35.436399 397538 cri.go:89] found id: "2aa7389ef61cc9d25cd698ba69252c55f65a55700ce26de817ee1de43120108c"
I0127 11:42:35.436404 397538 cri.go:89] found id: "3fd3c19397b2e924ba0e4556f2c9377eccdc58314ca8d2bdcf32db10b478ae41"
I0127 11:42:35.436409 397538 cri.go:89] found id: "4c1ad43ef803c9766b730638f334f3a0c9a8d763435da1e2ffb842c2761df8ec"
I0127 11:42:35.436413 397538 cri.go:89] found id: "7fe1f69096846344beae6da8d2abc2e0ced625ec110150d7398131c8ba421daa"
I0127 11:42:35.436418 397538 cri.go:89] found id: ""
I0127 11:42:35.436461 397538 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
W0127 11:42:35.454110 397538 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-01-27T11:42:35Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I0127 11:42:35.454187 397538 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0127 11:42:35.464979 397538 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0127 11:42:35.464999 397538 kubeadm.go:593] restartPrimaryControlPlane start ...
I0127 11:42:35.465040 397538 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0127 11:42:35.477379 397538 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0127 11:42:35.478050 397538 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-976043" does not appear in /home/jenkins/minikube-integration/20319-348858/kubeconfig
I0127 11:42:35.478400 397538 kubeconfig.go:62] /home/jenkins/minikube-integration/20319-348858/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-976043" cluster setting kubeconfig missing "no-preload-976043" context setting]
I0127 11:42:35.478926 397538 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-348858/kubeconfig: {Name:mk12891275228a2835a35659c2ede45028f0a576 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 11:42:35.480345 397538 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0127 11:42:35.491256 397538 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.171
I0127 11:42:35.491286 397538 kubeadm.go:1160] stopping kube-system containers ...
I0127 11:42:35.491301 397538 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
I0127 11:42:35.491346 397538 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0127 11:42:35.533388 397538 cri.go:89] found id: "592526ef804938a6c1c336b289cd8827d738d6311e20cc1c6faea6e7a38ddafb"
I0127 11:42:35.533416 397538 cri.go:89] found id: "092bc5ec03dc81a509614dd4608faacb928e67005952659d9331f62a97f079d9"
I0127 11:42:35.533422 397538 cri.go:89] found id: "f845360ca5e3d739fde48598fe03a808590cbf150c4bf3148b318621f8d63d81"
I0127 11:42:35.533428 397538 cri.go:89] found id: "cedb41d0de988e1cddd2b9e34ef09066434b9415da107f3ec047f2981ee476ca"
I0127 11:42:35.533432 397538 cri.go:89] found id: "2aa7389ef61cc9d25cd698ba69252c55f65a55700ce26de817ee1de43120108c"
I0127 11:42:35.533438 397538 cri.go:89] found id: "3fd3c19397b2e924ba0e4556f2c9377eccdc58314ca8d2bdcf32db10b478ae41"
I0127 11:42:35.533453 397538 cri.go:89] found id: "4c1ad43ef803c9766b730638f334f3a0c9a8d763435da1e2ffb842c2761df8ec"
I0127 11:42:35.533458 397538 cri.go:89] found id: "7fe1f69096846344beae6da8d2abc2e0ced625ec110150d7398131c8ba421daa"
I0127 11:42:35.533462 397538 cri.go:89] found id: ""
I0127 11:42:35.533469 397538 cri.go:252] Stopping containers: [592526ef804938a6c1c336b289cd8827d738d6311e20cc1c6faea6e7a38ddafb 092bc5ec03dc81a509614dd4608faacb928e67005952659d9331f62a97f079d9 f845360ca5e3d739fde48598fe03a808590cbf150c4bf3148b318621f8d63d81 cedb41d0de988e1cddd2b9e34ef09066434b9415da107f3ec047f2981ee476ca 2aa7389ef61cc9d25cd698ba69252c55f65a55700ce26de817ee1de43120108c 3fd3c19397b2e924ba0e4556f2c9377eccdc58314ca8d2bdcf32db10b478ae41 4c1ad43ef803c9766b730638f334f3a0c9a8d763435da1e2ffb842c2761df8ec 7fe1f69096846344beae6da8d2abc2e0ced625ec110150d7398131c8ba421daa]
I0127 11:42:35.533525 397538 ssh_runner.go:195] Run: which crictl
I0127 11:42:35.537866 397538 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 592526ef804938a6c1c336b289cd8827d738d6311e20cc1c6faea6e7a38ddafb 092bc5ec03dc81a509614dd4608faacb928e67005952659d9331f62a97f079d9 f845360ca5e3d739fde48598fe03a808590cbf150c4bf3148b318621f8d63d81 cedb41d0de988e1cddd2b9e34ef09066434b9415da107f3ec047f2981ee476ca 2aa7389ef61cc9d25cd698ba69252c55f65a55700ce26de817ee1de43120108c 3fd3c19397b2e924ba0e4556f2c9377eccdc58314ca8d2bdcf32db10b478ae41 4c1ad43ef803c9766b730638f334f3a0c9a8d763435da1e2ffb842c2761df8ec 7fe1f69096846344beae6da8d2abc2e0ced625ec110150d7398131c8ba421daa
I0127 11:42:35.577379 397538 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0127 11:42:35.594728 397538 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0127 11:42:35.605636 397538 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0127 11:42:35.605659 397538 kubeadm.go:157] found existing configuration files:
I0127 11:42:35.605702 397538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0127 11:42:35.617924 397538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0127 11:42:35.617977 397538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0127 11:42:35.630441 397538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0127 11:42:35.640581 397538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0127 11:42:35.640628 397538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0127 11:42:35.650822 397538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0127 11:42:35.662986 397538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0127 11:42:35.663034 397538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0127 11:42:35.675243 397538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0127 11:42:35.687128 397538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0127 11:42:35.687177 397538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0127 11:42:35.699749 397538 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0127 11:42:35.712592 397538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0127 11:42:35.847944 397538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0127 11:42:36.870825 397538 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.022832506s)
I0127 11:42:36.870862 397538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0127 11:42:37.118281 397538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0127 11:42:37.230184 397538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0127 11:42:37.368659 397538 api_server.go:52] waiting for apiserver process to appear ...
I0127 11:42:37.368754 397538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 11:42:37.868881 397538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 11:42:38.369735 397538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 11:42:38.391274 397538 api_server.go:72] duration metric: took 1.022614421s to wait for apiserver process to appear ...
I0127 11:42:38.391309 397538 api_server.go:88] waiting for apiserver healthz status ...
I0127 11:42:38.391336 397538 api_server.go:253] Checking apiserver healthz at https://192.168.72.171:8443/healthz ...
I0127 11:42:38.391882 397538 api_server.go:269] stopped: https://192.168.72.171:8443/healthz: Get "https://192.168.72.171:8443/healthz": dial tcp 192.168.72.171:8443: connect: connection refused
I0127 11:42:38.892089 397538 api_server.go:253] Checking apiserver healthz at https://192.168.72.171:8443/healthz ...
I0127 11:42:41.606183 397538 api_server.go:279] https://192.168.72.171:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0127 11:42:41.606224 397538 api_server.go:103] status: https://192.168.72.171:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0127 11:42:41.606249 397538 api_server.go:253] Checking apiserver healthz at https://192.168.72.171:8443/healthz ...
I0127 11:42:41.637709 397538 api_server.go:279] https://192.168.72.171:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0127 11:42:41.637740 397538 api_server.go:103] status: https://192.168.72.171:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0127 11:42:41.892205 397538 api_server.go:253] Checking apiserver healthz at https://192.168.72.171:8443/healthz ...
I0127 11:42:41.901055 397538 api_server.go:279] https://192.168.72.171:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0127 11:42:41.901097 397538 api_server.go:103] status: https://192.168.72.171:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0127 11:42:42.391537 397538 api_server.go:253] Checking apiserver healthz at https://192.168.72.171:8443/healthz ...
I0127 11:42:42.401042 397538 api_server.go:279] https://192.168.72.171:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0127 11:42:42.401077 397538 api_server.go:103] status: https://192.168.72.171:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0127 11:42:42.891470 397538 api_server.go:253] Checking apiserver healthz at https://192.168.72.171:8443/healthz ...
I0127 11:42:42.919612 397538 api_server.go:279] https://192.168.72.171:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0127 11:42:42.919641 397538 api_server.go:103] status: https://192.168.72.171:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0127 11:42:43.391456 397538 api_server.go:253] Checking apiserver healthz at https://192.168.72.171:8443/healthz ...
I0127 11:42:43.397718 397538 api_server.go:279] https://192.168.72.171:8443/healthz returned 200:
ok
I0127 11:42:43.405498 397538 api_server.go:141] control plane version: v1.32.1
I0127 11:42:43.405531 397538 api_server.go:131] duration metric: took 5.014213795s to wait for apiserver health ...
I0127 11:42:43.405544 397538 cni.go:84] Creating CNI manager for ""
I0127 11:42:43.405555 397538 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0127 11:42:43.407066 397538 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0127 11:42:43.408189 397538 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0127 11:42:43.421042 397538 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0127 11:42:43.441442 397538 system_pods.go:43] waiting for kube-system pods to appear ...
I0127 11:42:43.453790 397538 system_pods.go:59] 8 kube-system pods found
I0127 11:42:43.453826 397538 system_pods.go:61] "coredns-668d6bf9bc-kl7br" [4c9a4a3c-b46d-43ea-8ecb-13ad6e04d183] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0127 11:42:43.453833 397538 system_pods.go:61] "etcd-no-preload-976043" [bf71a082-71be-41b6-b3c9-662972866d48] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0127 11:42:43.453840 397538 system_pods.go:61] "kube-apiserver-no-preload-976043" [73449d58-727b-41f5-b151-5f2d84a608a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0127 11:42:43.453849 397538 system_pods.go:61] "kube-controller-manager-no-preload-976043" [f1cb08d8-d445-4ea9-b742-02cb993145e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0127 11:42:43.453858 397538 system_pods.go:61] "kube-proxy-hbtts" [5c3f5981-4c7c-4a09-b11e-5130a4bcc58b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0127 11:42:43.453872 397538 system_pods.go:61] "kube-scheduler-no-preload-976043" [71129e30-f010-47a1-94e2-da06808e6cac] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0127 11:42:43.453888 397538 system_pods.go:61] "metrics-server-f79f97bbb-kd26p" [331dbc70-7767-4514-bae7-7de96157962b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0127 11:42:43.453901 397538 system_pods.go:61] "storage-provisioner" [29f19d3c-f21f-48e5-8e94-1a62782873de] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0127 11:42:43.453913 397538 system_pods.go:74] duration metric: took 12.448185ms to wait for pod list to return data ...
I0127 11:42:43.453929 397538 node_conditions.go:102] verifying NodePressure condition ...
I0127 11:42:43.457750 397538 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0127 11:42:43.457779 397538 node_conditions.go:123] node cpu capacity is 2
I0127 11:42:43.457793 397538 node_conditions.go:105] duration metric: took 3.853672ms to run NodePressure ...
I0127 11:42:43.457815 397538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0127 11:42:43.795140 397538 kubeadm.go:724] waiting for restarted kubelet to initialise ...
I0127 11:42:43.803094 397538 kubeadm.go:739] kubelet initialised
I0127 11:42:43.803117 397538 kubeadm.go:740] duration metric: took 7.947754ms waiting for restarted kubelet to initialise ...
I0127 11:42:43.803128 397538 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0127 11:42:43.813516 397538 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-kl7br" in "kube-system" namespace to be "Ready" ...
I0127 11:42:45.821221 397538 pod_ready.go:103] pod "coredns-668d6bf9bc-kl7br" in "kube-system" namespace has status "Ready":"False"
I0127 11:42:48.320697 397538 pod_ready.go:103] pod "coredns-668d6bf9bc-kl7br" in "kube-system" namespace has status "Ready":"False"
I0127 11:42:50.824188 397538 pod_ready.go:103] pod "coredns-668d6bf9bc-kl7br" in "kube-system" namespace has status "Ready":"False"
I0127 11:42:51.822688 397538 pod_ready.go:93] pod "coredns-668d6bf9bc-kl7br" in "kube-system" namespace has status "Ready":"True"
I0127 11:42:51.822715 397538 pod_ready.go:82] duration metric: took 8.009170425s for pod "coredns-668d6bf9bc-kl7br" in "kube-system" namespace to be "Ready" ...
I0127 11:42:51.822726 397538 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-976043" in "kube-system" namespace to be "Ready" ...
I0127 11:42:51.827704 397538 pod_ready.go:93] pod "etcd-no-preload-976043" in "kube-system" namespace has status "Ready":"True"
I0127 11:42:51.827738 397538 pod_ready.go:82] duration metric: took 5.005165ms for pod "etcd-no-preload-976043" in "kube-system" namespace to be "Ready" ...
I0127 11:42:51.827752 397538 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-976043" in "kube-system" namespace to be "Ready" ...
I0127 11:42:51.832505 397538 pod_ready.go:93] pod "kube-apiserver-no-preload-976043" in "kube-system" namespace has status "Ready":"True"
I0127 11:42:51.832530 397538 pod_ready.go:82] duration metric: took 4.76871ms for pod "kube-apiserver-no-preload-976043" in "kube-system" namespace to be "Ready" ...
I0127 11:42:51.832543 397538 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-976043" in "kube-system" namespace to be "Ready" ...
I0127 11:42:51.837246 397538 pod_ready.go:93] pod "kube-controller-manager-no-preload-976043" in "kube-system" namespace has status "Ready":"True"
I0127 11:42:51.837266 397538 pod_ready.go:82] duration metric: took 4.715561ms for pod "kube-controller-manager-no-preload-976043" in "kube-system" namespace to be "Ready" ...
I0127 11:42:51.837275 397538 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-hbtts" in "kube-system" namespace to be "Ready" ...
I0127 11:42:51.841769 397538 pod_ready.go:93] pod "kube-proxy-hbtts" in "kube-system" namespace has status "Ready":"True"
I0127 11:42:51.841789 397538 pod_ready.go:82] duration metric: took 4.507355ms for pod "kube-proxy-hbtts" in "kube-system" namespace to be "Ready" ...
I0127 11:42:51.841808 397538 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-976043" in "kube-system" namespace to be "Ready" ...
I0127 11:42:52.218667 397538 pod_ready.go:93] pod "kube-scheduler-no-preload-976043" in "kube-system" namespace has status "Ready":"True"
I0127 11:42:52.218697 397538 pod_ready.go:82] duration metric: took 376.878504ms for pod "kube-scheduler-no-preload-976043" in "kube-system" namespace to be "Ready" ...
I0127 11:42:52.218713 397538 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace to be "Ready" ...
I0127 11:42:54.227099 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:42:56.730104 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:42:58.731158 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:43:01.226937 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:43:03.227455 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:43:05.725903 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:43:08.224728 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:43:10.225751 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:43:12.226333 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:43:14.724890 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:43:17.227515 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:43:19.726498 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:43:22.226150 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:43:24.724857 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:43:27.225563 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:43:29.225653 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:43:31.725147 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:43:33.725374 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:43:36.225540 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:43:38.724572 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:43:41.224062 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:43:43.225491 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:43:45.723890 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:43:47.724570 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:43:49.724802 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:43:51.725104 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:43:54.224681 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:43:56.724258 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:43:58.726811 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:44:01.225445 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:44:03.225504 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:44:05.225804 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:44:07.724625 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:44:09.725469 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:44:12.226469 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:44:14.724112 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:44:16.724412 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:44:18.725198 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:44:20.725929 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:44:23.226983 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:44:25.724639 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:44:27.725194 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:44:30.223869 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:44:32.225658 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:44:34.724879 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:44:37.228509 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:44:39.725386 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:44:42.225001 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:44:44.725533 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:44:47.226857 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:44:49.724167 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:44:51.725505 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:44:53.726155 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:44:56.225365 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:44:58.724700 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:45:00.724747 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:45:03.226195 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:45:05.723646 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:45:07.724134 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:45:09.725928 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:45:12.225252 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:45:14.724086 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:45:16.724383 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:45:18.725324 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:45:21.225304 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:45:23.226569 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:45:25.724694 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:45:27.725948 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:45:29.725998 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:45:32.225036 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:45:34.226745 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:45:36.725662 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:45:39.226109 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:45:41.729561 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:45:44.225033 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:45:46.226354 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:45:48.723795 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:45:50.724244 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:45:52.725214 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:45:55.224770 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:45:57.225423 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:45:59.725101 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:46:02.225903 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:46:04.725305 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:46:06.727299 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:46:09.225730 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:46:11.725343 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:46:14.226106 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:46:16.226336 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:46:18.226656 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:46:20.728233 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:46:23.225330 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:46:25.225642 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:46:27.725596 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:46:30.225271 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:46:32.226910 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:46:34.725753 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:46:36.726023 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:46:38.726555 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:46:41.224361 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:46:43.226049 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:46:45.226221 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:46:47.226574 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:46:49.732759 397538 pod_ready.go:103] pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace has status "Ready":"False"
I0127 11:46:52.219144 397538 pod_ready.go:82] duration metric: took 4m0.000395098s for pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace to be "Ready" ...
E0127 11:46:52.219176 397538 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-kd26p" in "kube-system" namespace to be "Ready" (will not retry!)
I0127 11:46:52.219202 397538 pod_ready.go:39] duration metric: took 4m8.416062213s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0127 11:46:52.219242 397538 kubeadm.go:597] duration metric: took 4m16.754235764s to restartPrimaryControlPlane
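(The four-minute stretch of pod_ready.go lines above is minikube polling the metrics-server pod for the Ready condition until its 4m0s budget expires. A minimal sketch of an equivalent readiness poll written against client-go is shown below; the file name, kubeconfig path, poll interval, and helper names are illustrative assumptions, not minikube's actual pod_ready implementation.)

// readiness_poll.go: sketch of a pod-readiness wait loop in the spirit of the
// pod_ready.go log lines above. Names and paths are assumptions for illustration.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig path; the test run writes its own under MINIKUBE_HOME.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	const ns, name = "kube-system", "metrics-server-f79f97bbb-kd26p"
	// Poll roughly every 2.5s for up to 4 minutes, mirroring the 4m0s budget
	// and the spacing of the log lines above (interval is an approximation).
	err = wait.PollUntilContextTimeout(context.Background(), 2500*time.Millisecond, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, getErr := client.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if getErr != nil {
				return false, nil // keep polling on transient errors
			}
			return isPodReady(pod), nil
		})
	if err != nil {
		fmt.Println("timed out waiting for pod to be Ready:", err)
		return
	}
	fmt.Println("pod is Ready")
}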
W0127 11:46:52.219339 397538 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
! Unable to restart control-plane node(s), will reset cluster: <no value>
I0127 11:46:52.219373 397538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I0127 11:46:54.231110 397538 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.011708362s)
I0127 11:46:54.231201 397538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0127 11:46:54.245569 397538 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0127 11:46:54.255544 397538 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0127 11:46:54.265103 397538 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0127 11:46:54.265122 397538 kubeadm.go:157] found existing configuration files:
I0127 11:46:54.265162 397538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0127 11:46:54.274787 397538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0127 11:46:54.274845 397538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0127 11:46:54.284700 397538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0127 11:46:54.296043 397538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0127 11:46:54.296094 397538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0127 11:46:54.306687 397538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0127 11:46:54.316592 397538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0127 11:46:54.316634 397538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0127 11:46:54.327048 397538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0127 11:46:54.336484 397538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0127 11:46:54.336575 397538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0127 11:46:54.346187 397538 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0127 11:46:54.517349 397538 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0127 11:47:02.511703 397538 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
I0127 11:47:02.511780 397538 kubeadm.go:310] [preflight] Running pre-flight checks
I0127 11:47:02.511862 397538 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0127 11:47:02.511994 397538 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0127 11:47:02.512101 397538 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0127 11:47:02.512189 397538 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0127 11:47:02.513436 397538 out.go:235] - Generating certificates and keys ...
I0127 11:47:02.513528 397538 kubeadm.go:310] [certs] Using existing ca certificate authority
I0127 11:47:02.513639 397538 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0127 11:47:02.513744 397538 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0127 11:47:02.513819 397538 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
I0127 11:47:02.513915 397538 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
I0127 11:47:02.514010 397538 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
I0127 11:47:02.514099 397538 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
I0127 11:47:02.514179 397538 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
I0127 11:47:02.514281 397538 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0127 11:47:02.514398 397538 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0127 11:47:02.514464 397538 kubeadm.go:310] [certs] Using the existing "sa" key
I0127 11:47:02.514567 397538 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0127 11:47:02.514655 397538 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0127 11:47:02.514739 397538 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0127 11:47:02.514817 397538 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0127 11:47:02.514903 397538 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0127 11:47:02.514993 397538 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0127 11:47:02.515101 397538 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0127 11:47:02.515191 397538 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0127 11:47:02.516275 397538 out.go:235] - Booting up control plane ...
I0127 11:47:02.516383 397538 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0127 11:47:02.516486 397538 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0127 11:47:02.516570 397538 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0127 11:47:02.516721 397538 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0127 11:47:02.516858 397538 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0127 11:47:02.516915 397538 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0127 11:47:02.517091 397538 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0127 11:47:02.517220 397538 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0127 11:47:02.517310 397538 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.728303ms
I0127 11:47:02.517411 397538 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0127 11:47:02.517497 397538 kubeadm.go:310] [api-check] The API server is healthy after 5.002592339s
I0127 11:47:02.517660 397538 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0127 11:47:02.517804 397538 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0127 11:47:02.517892 397538 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0127 11:47:02.518080 397538 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-976043 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0127 11:47:02.518169 397538 kubeadm.go:310] [bootstrap-token] Using token: dgvydd.xna4ynr2hbmwtuzw
I0127 11:47:02.519545 397538 out.go:235] - Configuring RBAC rules ...
I0127 11:47:02.519669 397538 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0127 11:47:02.519772 397538 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0127 11:47:02.519947 397538 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0127 11:47:02.520118 397538 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0127 11:47:02.520289 397538 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0127 11:47:02.520423 397538 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0127 11:47:02.520574 397538 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0127 11:47:02.520643 397538 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0127 11:47:02.520712 397538 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0127 11:47:02.520721 397538 kubeadm.go:310]
I0127 11:47:02.520812 397538 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0127 11:47:02.520827 397538 kubeadm.go:310]
I0127 11:47:02.520934 397538 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0127 11:47:02.520947 397538 kubeadm.go:310]
I0127 11:47:02.520980 397538 kubeadm.go:310] mkdir -p $HOME/.kube
I0127 11:47:02.521067 397538 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0127 11:47:02.521152 397538 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0127 11:47:02.521167 397538 kubeadm.go:310]
I0127 11:47:02.521247 397538 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0127 11:47:02.521256 397538 kubeadm.go:310]
I0127 11:47:02.521333 397538 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0127 11:47:02.521342 397538 kubeadm.go:310]
I0127 11:47:02.521417 397538 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0127 11:47:02.521541 397538 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0127 11:47:02.521665 397538 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0127 11:47:02.521676 397538 kubeadm.go:310]
I0127 11:47:02.521779 397538 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0127 11:47:02.521880 397538 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0127 11:47:02.521889 397538 kubeadm.go:310]
I0127 11:47:02.522019 397538 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token dgvydd.xna4ynr2hbmwtuzw \
I0127 11:47:02.522168 397538 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:c769a71fa2072963699012a67c9bb4b27b6fc88b52aea51191b7b2189ca81982 \
I0127 11:47:02.522200 397538 kubeadm.go:310] --control-plane
I0127 11:47:02.522216 397538 kubeadm.go:310]
I0127 11:47:02.522326 397538 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0127 11:47:02.522336 397538 kubeadm.go:310]
I0127 11:47:02.522448 397538 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token dgvydd.xna4ynr2hbmwtuzw \
I0127 11:47:02.522601 397538 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:c769a71fa2072963699012a67c9bb4b27b6fc88b52aea51191b7b2189ca81982
I0127 11:47:02.522616 397538 cni.go:84] Creating CNI manager for ""
I0127 11:47:02.522625 397538 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0127 11:47:02.524672 397538 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0127 11:47:02.525706 397538 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0127 11:47:02.538650 397538 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0127 11:47:02.566811 397538 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0127 11:47:02.566893 397538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 11:47:02.566922 397538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-976043 minikube.k8s.io/updated_at=2025_01_27T11_47_02_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=35c230aa12d4986001aef5f6e29069f3bc5493aa minikube.k8s.io/name=no-preload-976043 minikube.k8s.io/primary=true
I0127 11:47:02.811376 397538 ops.go:34] apiserver oom_adj: -16
I0127 11:47:02.811527 397538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 11:47:03.312022 397538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 11:47:03.812210 397538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 11:47:04.311782 397538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 11:47:04.812533 397538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 11:47:05.312605 397538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 11:47:05.812482 397538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 11:47:06.311649 397538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 11:47:06.811846 397538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 11:47:06.921923 397538 kubeadm.go:1113] duration metric: took 4.355092744s to wait for elevateKubeSystemPrivileges
I0127 11:47:06.921957 397538 kubeadm.go:394] duration metric: took 4m31.538223966s to StartCluster
I0127 11:47:06.921979 397538 settings.go:142] acquiring lock: {Name:mkb277d193c8888d23a77778c65f322a69e59091 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 11:47:06.922096 397538 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20319-348858/kubeconfig
I0127 11:47:06.923598 397538 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-348858/kubeconfig: {Name:mk12891275228a2835a35659c2ede45028f0a576 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 11:47:06.923858 397538 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.171 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0127 11:47:06.923968 397538 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0127 11:47:06.924085 397538 config.go:182] Loaded profile config "no-preload-976043": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 11:47:06.924072 397538 addons.go:69] Setting storage-provisioner=true in profile "no-preload-976043"
I0127 11:47:06.924104 397538 addons.go:69] Setting dashboard=true in profile "no-preload-976043"
I0127 11:47:06.924119 397538 addons.go:69] Setting metrics-server=true in profile "no-preload-976043"
I0127 11:47:06.924126 397538 addons.go:238] Setting addon storage-provisioner=true in "no-preload-976043"
I0127 11:47:06.924132 397538 addons.go:238] Setting addon dashboard=true in "no-preload-976043"
I0127 11:47:06.924133 397538 addons.go:238] Setting addon metrics-server=true in "no-preload-976043"
W0127 11:47:06.924136 397538 addons.go:247] addon storage-provisioner should already be in state true
W0127 11:47:06.924142 397538 addons.go:247] addon dashboard should already be in state true
W0127 11:47:06.924150 397538 addons.go:247] addon metrics-server should already be in state true
I0127 11:47:06.924096 397538 addons.go:69] Setting default-storageclass=true in profile "no-preload-976043"
I0127 11:47:06.924211 397538 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-976043"
I0127 11:47:06.924182 397538 host.go:66] Checking if "no-preload-976043" exists ...
I0127 11:47:06.924182 397538 host.go:66] Checking if "no-preload-976043" exists ...
I0127 11:47:06.924182 397538 host.go:66] Checking if "no-preload-976043" exists ...
I0127 11:47:06.924663 397538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 11:47:06.924717 397538 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 11:47:06.924792 397538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 11:47:06.924802 397538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 11:47:06.924817 397538 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 11:47:06.924838 397538 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 11:47:06.924951 397538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 11:47:06.924994 397538 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 11:47:06.927481 397538 out.go:177] * Verifying Kubernetes components...
I0127 11:47:06.928798 397538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 11:47:06.944266 397538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43989
I0127 11:47:06.944533 397538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36299
I0127 11:47:06.944635 397538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43463
I0127 11:47:06.944782 397538 main.go:141] libmachine: () Calling .GetVersion
I0127 11:47:06.945085 397538 main.go:141] libmachine: () Calling .GetVersion
I0127 11:47:06.945253 397538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37173
I0127 11:47:06.945607 397538 main.go:141] libmachine: Using API Version 1
I0127 11:47:06.945646 397538 main.go:141] libmachine: Using API Version 1
I0127 11:47:06.945671 397538 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 11:47:06.945722 397538 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 11:47:06.946041 397538 main.go:141] libmachine: () Calling .GetVersion
I0127 11:47:06.946145 397538 main.go:141] libmachine: () Calling .GetMachineName
I0127 11:47:06.946203 397538 main.go:141] libmachine: () Calling .GetMachineName
I0127 11:47:06.946623 397538 main.go:141] libmachine: Using API Version 1
I0127 11:47:06.946643 397538 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 11:47:06.946742 397538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 11:47:06.946786 397538 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 11:47:06.946951 397538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 11:47:06.946994 397538 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 11:47:06.947206 397538 main.go:141] libmachine: () Calling .GetMachineName
I0127 11:47:06.947222 397538 main.go:141] libmachine: () Calling .GetVersion
I0127 11:47:06.947394 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetState
I0127 11:47:06.947712 397538 main.go:141] libmachine: Using API Version 1
I0127 11:47:06.947736 397538 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 11:47:06.948119 397538 main.go:141] libmachine: () Calling .GetMachineName
I0127 11:47:06.948785 397538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 11:47:06.948846 397538 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 11:47:06.951738 397538 addons.go:238] Setting addon default-storageclass=true in "no-preload-976043"
W0127 11:47:06.951759 397538 addons.go:247] addon default-storageclass should already be in state true
I0127 11:47:06.951791 397538 host.go:66] Checking if "no-preload-976043" exists ...
I0127 11:47:06.952140 397538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 11:47:06.952171 397538 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 11:47:06.973135 397538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45689
I0127 11:47:06.973855 397538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36191
I0127 11:47:06.974102 397538 main.go:141] libmachine: () Calling .GetVersion
I0127 11:47:06.974240 397538 main.go:141] libmachine: () Calling .GetVersion
I0127 11:47:06.974748 397538 main.go:141] libmachine: Using API Version 1
I0127 11:47:06.974769 397538 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 11:47:06.974883 397538 main.go:141] libmachine: Using API Version 1
I0127 11:47:06.974902 397538 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 11:47:06.975329 397538 main.go:141] libmachine: () Calling .GetMachineName
I0127 11:47:06.975608 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetState
I0127 11:47:06.977046 397538 main.go:141] libmachine: () Calling .GetMachineName
I0127 11:47:06.977341 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetState
I0127 11:47:06.979372 397538 main.go:141] libmachine: (no-preload-976043) Calling .DriverName
I0127 11:47:06.979929 397538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35977
I0127 11:47:06.980128 397538 main.go:141] libmachine: (no-preload-976043) Calling .DriverName
I0127 11:47:06.980305 397538 main.go:141] libmachine: () Calling .GetVersion
I0127 11:47:06.980939 397538 main.go:141] libmachine: Using API Version 1
I0127 11:47:06.980953 397538 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 11:47:06.981201 397538 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0127 11:47:06.981499 397538 main.go:141] libmachine: () Calling .GetMachineName
I0127 11:47:06.981883 397538 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0127 11:47:06.982169 397538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 11:47:06.982227 397538 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 11:47:06.983298 397538 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0127 11:47:06.983322 397538 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0127 11:47:06.983344 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHHostname
I0127 11:47:06.983857 397538 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0127 11:47:06.985635 397538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45855
I0127 11:47:06.986281 397538 main.go:141] libmachine: () Calling .GetVersion
I0127 11:47:06.986637 397538 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0127 11:47:06.986661 397538 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0127 11:47:06.986683 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHHostname
I0127 11:47:06.987067 397538 main.go:141] libmachine: Using API Version 1
I0127 11:47:06.987084 397538 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 11:47:06.987615 397538 main.go:141] libmachine: () Calling .GetMachineName
I0127 11:47:06.987933 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetState
I0127 11:47:06.991679 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:47:06.992043 397538 main.go:141] libmachine: (no-preload-976043) Calling .DriverName
I0127 11:47:06.992369 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:47:06.992880 397538 main.go:141] libmachine: (no-preload-976043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:a3:49", ip: ""} in network mk-no-preload-976043: {Iface:virbr4 ExpiryTime:2025-01-27 12:38:59 +0000 UTC Type:0 Mac:52:54:00:f9:a3:49 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:no-preload-976043 Clientid:01:52:54:00:f9:a3:49}
I0127 11:47:06.992905 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined IP address 192.168.72.171 and MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:47:06.993076 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHPort
I0127 11:47:06.993192 397538 main.go:141] libmachine: (no-preload-976043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:a3:49", ip: ""} in network mk-no-preload-976043: {Iface:virbr4 ExpiryTime:2025-01-27 12:38:59 +0000 UTC Type:0 Mac:52:54:00:f9:a3:49 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:no-preload-976043 Clientid:01:52:54:00:f9:a3:49}
I0127 11:47:06.993217 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined IP address 192.168.72.171 and MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:47:06.993263 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHKeyPath
I0127 11:47:06.993421 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHUsername
I0127 11:47:06.993568 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHPort
I0127 11:47:06.993630 397538 sshutil.go:53] new ssh client: &{IP:192.168.72.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/no-preload-976043/id_rsa Username:docker}
I0127 11:47:06.993759 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHKeyPath
I0127 11:47:06.993894 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHUsername
I0127 11:47:06.994030 397538 sshutil.go:53] new ssh client: &{IP:192.168.72.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/no-preload-976043/id_rsa Username:docker}
I0127 11:47:07.001313 397538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35491
I0127 11:47:07.001677 397538 main.go:141] libmachine: () Calling .GetVersion
I0127 11:47:07.002144 397538 main.go:141] libmachine: Using API Version 1
I0127 11:47:07.002158 397538 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 11:47:07.002626 397538 main.go:141] libmachine: () Calling .GetMachineName
I0127 11:47:07.002804 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetState
I0127 11:47:07.004433 397538 main.go:141] libmachine: (no-preload-976043) Calling .DriverName
I0127 11:47:07.004630 397538 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0127 11:47:07.004654 397538 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0127 11:47:07.004666 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHHostname
I0127 11:47:07.007710 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:47:07.008211 397538 main.go:141] libmachine: (no-preload-976043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:a3:49", ip: ""} in network mk-no-preload-976043: {Iface:virbr4 ExpiryTime:2025-01-27 12:38:59 +0000 UTC Type:0 Mac:52:54:00:f9:a3:49 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:no-preload-976043 Clientid:01:52:54:00:f9:a3:49}
I0127 11:47:07.008307 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined IP address 192.168.72.171 and MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:47:07.008552 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHPort
I0127 11:47:07.008724 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHKeyPath
I0127 11:47:07.008884 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHUsername
I0127 11:47:07.009008 397538 sshutil.go:53] new ssh client: &{IP:192.168.72.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/no-preload-976043/id_rsa Username:docker}
I0127 11:47:07.017633 397538 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0127 11:47:07.018862 397538 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0127 11:47:07.018884 397538 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0127 11:47:07.018906 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHHostname
I0127 11:47:07.022158 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:47:07.022759 397538 main.go:141] libmachine: (no-preload-976043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:a3:49", ip: ""} in network mk-no-preload-976043: {Iface:virbr4 ExpiryTime:2025-01-27 12:38:59 +0000 UTC Type:0 Mac:52:54:00:f9:a3:49 Iaid: IPaddr:192.168.72.171 Prefix:24 Hostname:no-preload-976043 Clientid:01:52:54:00:f9:a3:49}
I0127 11:47:07.022784 397538 main.go:141] libmachine: (no-preload-976043) DBG | domain no-preload-976043 has defined IP address 192.168.72.171 and MAC address 52:54:00:f9:a3:49 in network mk-no-preload-976043
I0127 11:47:07.022955 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHPort
I0127 11:47:07.023096 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHKeyPath
I0127 11:47:07.023241 397538 main.go:141] libmachine: (no-preload-976043) Calling .GetSSHUsername
I0127 11:47:07.023384 397538 sshutil.go:53] new ssh client: &{IP:192.168.72.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/no-preload-976043/id_rsa Username:docker}
I0127 11:47:07.214231 397538 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0127 11:47:07.237881 397538 node_ready.go:35] waiting up to 6m0s for node "no-preload-976043" to be "Ready" ...
I0127 11:47:07.263158 397538 node_ready.go:49] node "no-preload-976043" has status "Ready":"True"
I0127 11:47:07.263185 397538 node_ready.go:38] duration metric: took 25.243171ms for node "no-preload-976043" to be "Ready" ...
I0127 11:47:07.263198 397538 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0127 11:47:07.270196 397538 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-5cktj" in "kube-system" namespace to be "Ready" ...
I0127 11:47:07.341301 397538 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0127 11:47:07.358210 397538 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0127 11:47:07.358235 397538 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0127 11:47:07.360985 397538 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0127 11:47:07.381453 397538 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0127 11:47:07.381492 397538 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0127 11:47:07.466768 397538 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0127 11:47:07.466802 397538 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0127 11:47:07.493189 397538 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0127 11:47:07.493219 397538 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0127 11:47:07.713486 397538 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0127 11:47:07.713521 397538 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0127 11:47:07.724092 397538 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0127 11:47:07.724125 397538 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0127 11:47:07.769193 397538 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0127 11:47:07.769227 397538 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0127 11:47:07.846823 397538 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0127 11:47:07.935651 397538 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0127 11:47:07.935684 397538 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0127 11:47:08.146639 397538 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0127 11:47:08.146679 397538 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0127 11:47:08.296867 397538 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0127 11:47:08.296901 397538 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0127 11:47:08.392971 397538 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0127 11:47:08.393017 397538 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0127 11:47:08.479861 397538 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0127 11:47:08.479897 397538 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0127 11:47:08.678114 397538 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0127 11:47:08.977235 397538 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.63589482s)
I0127 11:47:08.977301 397538 main.go:141] libmachine: Making call to close driver server
I0127 11:47:08.977323 397538 main.go:141] libmachine: (no-preload-976043) Calling .Close
I0127 11:47:08.977243 397538 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.616166623s)
I0127 11:47:08.977402 397538 main.go:141] libmachine: Making call to close driver server
I0127 11:47:08.977422 397538 main.go:141] libmachine: (no-preload-976043) Calling .Close
I0127 11:47:08.977652 397538 main.go:141] libmachine: Successfully made call to close driver server
I0127 11:47:08.977694 397538 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 11:47:08.977710 397538 main.go:141] libmachine: Making call to close driver server
I0127 11:47:08.977720 397538 main.go:141] libmachine: (no-preload-976043) Calling .Close
I0127 11:47:08.977871 397538 main.go:141] libmachine: Successfully made call to close driver server
I0127 11:47:08.977887 397538 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 11:47:08.977896 397538 main.go:141] libmachine: Making call to close driver server
I0127 11:47:08.977904 397538 main.go:141] libmachine: (no-preload-976043) Calling .Close
I0127 11:47:08.978211 397538 main.go:141] libmachine: Successfully made call to close driver server
I0127 11:47:08.978228 397538 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 11:47:08.979829 397538 main.go:141] libmachine: (no-preload-976043) DBG | Closing plugin on server side
I0127 11:47:08.979875 397538 main.go:141] libmachine: Successfully made call to close driver server
I0127 11:47:08.979882 397538 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 11:47:09.000588 397538 main.go:141] libmachine: Making call to close driver server
I0127 11:47:09.000611 397538 main.go:141] libmachine: (no-preload-976043) Calling .Close
I0127 11:47:09.000859 397538 main.go:141] libmachine: Successfully made call to close driver server
I0127 11:47:09.000880 397538 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 11:47:09.000894 397538 main.go:141] libmachine: (no-preload-976043) DBG | Closing plugin on server side
I0127 11:47:09.324488 397538 pod_ready.go:93] pod "coredns-668d6bf9bc-5cktj" in "kube-system" namespace has status "Ready":"True"
I0127 11:47:09.324521 397538 pod_ready.go:82] duration metric: took 2.054295919s for pod "coredns-668d6bf9bc-5cktj" in "kube-system" namespace to be "Ready" ...
I0127 11:47:09.324537 397538 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-kjqjk" in "kube-system" namespace to be "Ready" ...
I0127 11:47:09.402781 397538 pod_ready.go:93] pod "coredns-668d6bf9bc-kjqjk" in "kube-system" namespace has status "Ready":"True"
I0127 11:47:09.402807 397538 pod_ready.go:82] duration metric: took 78.262484ms for pod "coredns-668d6bf9bc-kjqjk" in "kube-system" namespace to be "Ready" ...
I0127 11:47:09.402819 397538 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-976043" in "kube-system" namespace to be "Ready" ...
I0127 11:47:09.537430 397538 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.690554711s)
I0127 11:47:09.537480 397538 main.go:141] libmachine: Making call to close driver server
I0127 11:47:09.537490 397538 main.go:141] libmachine: (no-preload-976043) Calling .Close
I0127 11:47:09.537841 397538 main.go:141] libmachine: (no-preload-976043) DBG | Closing plugin on server side
I0127 11:47:09.537922 397538 main.go:141] libmachine: Successfully made call to close driver server
I0127 11:47:09.537948 397538 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 11:47:09.537959 397538 main.go:141] libmachine: Making call to close driver server
I0127 11:47:09.537968 397538 main.go:141] libmachine: (no-preload-976043) Calling .Close
I0127 11:47:09.538230 397538 main.go:141] libmachine: Successfully made call to close driver server
I0127 11:47:09.538246 397538 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 11:47:09.538257 397538 addons.go:479] Verifying addon metrics-server=true in "no-preload-976043"
I0127 11:47:10.322468 397538 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.644290278s)
I0127 11:47:10.322545 397538 main.go:141] libmachine: Making call to close driver server
I0127 11:47:10.322564 397538 main.go:141] libmachine: (no-preload-976043) Calling .Close
I0127 11:47:10.323749 397538 main.go:141] libmachine: (no-preload-976043) DBG | Closing plugin on server side
I0127 11:47:10.323766 397538 main.go:141] libmachine: Successfully made call to close driver server
I0127 11:47:10.323841 397538 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 11:47:10.323868 397538 main.go:141] libmachine: Making call to close driver server
I0127 11:47:10.323877 397538 main.go:141] libmachine: (no-preload-976043) Calling .Close
I0127 11:47:10.324209 397538 main.go:141] libmachine: (no-preload-976043) DBG | Closing plugin on server side
I0127 11:47:10.324260 397538 main.go:141] libmachine: Successfully made call to close driver server
I0127 11:47:10.324276 397538 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 11:47:10.326443 397538 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p no-preload-976043 addons enable metrics-server
I0127 11:47:10.327576 397538 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
I0127 11:47:10.328699 397538 addons.go:514] duration metric: took 3.404742641s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
I0127 11:47:11.408591 397538 pod_ready.go:103] pod "etcd-no-preload-976043" in "kube-system" namespace has status "Ready":"False"
I0127 11:47:12.412726 397538 pod_ready.go:93] pod "etcd-no-preload-976043" in "kube-system" namespace has status "Ready":"True"
I0127 11:47:12.412747 397538 pod_ready.go:82] duration metric: took 3.009921497s for pod "etcd-no-preload-976043" in "kube-system" namespace to be "Ready" ...
I0127 11:47:12.412757 397538 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-976043" in "kube-system" namespace to be "Ready" ...
I0127 11:47:12.419108 397538 pod_ready.go:93] pod "kube-apiserver-no-preload-976043" in "kube-system" namespace has status "Ready":"True"
I0127 11:47:12.419131 397538 pod_ready.go:82] duration metric: took 6.362026ms for pod "kube-apiserver-no-preload-976043" in "kube-system" namespace to be "Ready" ...
I0127 11:47:12.419140 397538 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-976043" in "kube-system" namespace to be "Ready" ...
I0127 11:47:14.425681 397538 pod_ready.go:103] pod "kube-controller-manager-no-preload-976043" in "kube-system" namespace has status "Ready":"False"
I0127 11:47:14.924760 397538 pod_ready.go:93] pod "kube-controller-manager-no-preload-976043" in "kube-system" namespace has status "Ready":"True"
I0127 11:47:14.924789 397538 pod_ready.go:82] duration metric: took 2.505641174s for pod "kube-controller-manager-no-preload-976043" in "kube-system" namespace to be "Ready" ...
I0127 11:47:14.924804 397538 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-44m77" in "kube-system" namespace to be "Ready" ...
I0127 11:47:14.930236 397538 pod_ready.go:93] pod "kube-proxy-44m77" in "kube-system" namespace has status "Ready":"True"
I0127 11:47:14.930255 397538 pod_ready.go:82] duration metric: took 5.444724ms for pod "kube-proxy-44m77" in "kube-system" namespace to be "Ready" ...
I0127 11:47:14.930264 397538 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-976043" in "kube-system" namespace to be "Ready" ...
I0127 11:47:14.934058 397538 pod_ready.go:93] pod "kube-scheduler-no-preload-976043" in "kube-system" namespace has status "Ready":"True"
I0127 11:47:14.934073 397538 pod_ready.go:82] duration metric: took 3.802556ms for pod "kube-scheduler-no-preload-976043" in "kube-system" namespace to be "Ready" ...
I0127 11:47:14.934081 397538 pod_ready.go:39] duration metric: took 7.670861335s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
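The readiness polling above (pod_ready.go waiting on coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy and kube-scheduler) can be reproduced by hand with kubectl. A minimal sketch, assuming kubectl is pointed at this cluster; the context name below simply mirrors the profile name and is an assumption:
# wait for one of the pods named in this log to report Ready
kubectl --context no-preload-976043 -n kube-system wait --for=condition=Ready pod/coredns-668d6bf9bc-5cktj --timeout=6m
# or list what the test polls by label, e.g. the kube-dns pods
kubectl --context no-preload-976043 -n kube-system get pods -l k8s-app=kube-dns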
I0127 11:47:14.934100 397538 api_server.go:52] waiting for apiserver process to appear ...
I0127 11:47:14.934154 397538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 11:47:14.951237 397538 api_server.go:72] duration metric: took 8.02734538s to wait for apiserver process to appear ...
I0127 11:47:14.951258 397538 api_server.go:88] waiting for apiserver healthz status ...
I0127 11:47:14.951276 397538 api_server.go:253] Checking apiserver healthz at https://192.168.72.171:8443/healthz ...
I0127 11:47:14.958111 397538 api_server.go:279] https://192.168.72.171:8443/healthz returned 200:
ok
I0127 11:47:14.959538 397538 api_server.go:141] control plane version: v1.32.1
I0127 11:47:14.959563 397538 api_server.go:131] duration metric: took 8.296106ms to wait for apiserver health ...
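The healthz probe logged by api_server.go above is a plain HTTPS GET against the apiserver. A manual equivalent, as a sketch that reuses the endpoint from this log and skips TLS verification for brevity (the real check presents the cluster's client certificates):
curl -k https://192.168.72.171:8443/healthz
# a healthy apiserver answers with: ok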
I0127 11:47:14.959572 397538 system_pods.go:43] waiting for kube-system pods to appear ...
I0127 11:47:14.967006 397538 system_pods.go:59] 9 kube-system pods found
I0127 11:47:14.967038 397538 system_pods.go:61] "coredns-668d6bf9bc-5cktj" [def28b6b-a9fa-4385-a844-1f827384e6cd] Running
I0127 11:47:14.967046 397538 system_pods.go:61] "coredns-668d6bf9bc-kjqjk" [14e1705d-7ee7-407f-a266-3c17da987f44] Running
I0127 11:47:14.967052 397538 system_pods.go:61] "etcd-no-preload-976043" [4ac8056f-a0f1-4004-9714-274d6bb1c966] Running
I0127 11:47:14.967059 397538 system_pods.go:61] "kube-apiserver-no-preload-976043" [ebf8e215-aa94-48b0-9951-c708fbe949f2] Running
I0127 11:47:14.967064 397538 system_pods.go:61] "kube-controller-manager-no-preload-976043" [cec6a288-312c-44f5-917a-2a2af911f261] Running
I0127 11:47:14.967070 397538 system_pods.go:61] "kube-proxy-44m77" [43e9e383-ae16-4265-9e7e-199b1adb4ac2] Running
I0127 11:47:14.967079 397538 system_pods.go:61] "kube-scheduler-no-preload-976043" [61f17854-a314-46ff-a7ab-6b2fca507dc6] Running
I0127 11:47:14.967089 397538 system_pods.go:61] "metrics-server-f79f97bbb-cxprr" [fcf4fd1c-5cc8-43ab-a46a-32c4f5559168] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0127 11:47:14.967105 397538 system_pods.go:61] "storage-provisioner" [8cc9c314-b668-4b0d-8d54-53a058019e73] Running
I0127 11:47:14.967124 397538 system_pods.go:74] duration metric: took 7.544376ms to wait for pod list to return data ...
I0127 11:47:14.967135 397538 default_sa.go:34] waiting for default service account to be created ...
I0127 11:47:14.969819 397538 default_sa.go:45] found service account: "default"
I0127 11:47:14.969846 397538 default_sa.go:55] duration metric: took 2.703478ms for default service account to be created ...
I0127 11:47:14.969856 397538 system_pods.go:137] waiting for k8s-apps to be running ...
I0127 11:47:15.077668 397538 system_pods.go:87] 9 kube-system pods found
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p no-preload-976043 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.32.1": signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-976043 -n no-preload-976043
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p no-preload-976043 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-976043 logs -n 25: (1.231231315s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/SecondStart logs:
-- stdout --
==> Audit <==
|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
| ssh | -p bridge-230154 sudo iptables | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
| | -t nat -L -n -v | | | | | |
| ssh | -p bridge-230154 sudo | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
| | systemctl status kubelet --all | | | | | |
| | --full --no-pager | | | | | |
| ssh | -p bridge-230154 sudo | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
| | systemctl cat kubelet | | | | | |
| | --no-pager | | | | | |
| ssh | -p bridge-230154 sudo | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
| | journalctl -xeu kubelet --all | | | | | |
| | --full --no-pager | | | | | |
| ssh | -p bridge-230154 sudo cat | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
| | /etc/kubernetes/kubelet.conf | | | | | |
| ssh | -p bridge-230154 sudo cat | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
| | /var/lib/kubelet/config.yaml | | | | | |
| ssh | -p bridge-230154 sudo | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | |
| | systemctl status docker --all | | | | | |
| | --full --no-pager | | | | | |
| ssh | -p bridge-230154 sudo | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
| | systemctl cat docker | | | | | |
| | --no-pager | | | | | |
| ssh | -p bridge-230154 sudo cat | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
| | /etc/docker/daemon.json | | | | | |
| ssh | -p bridge-230154 sudo docker | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | |
| | system info | | | | | |
| ssh | -p bridge-230154 sudo | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | |
| | systemctl status cri-docker | | | | | |
| | --all --full --no-pager | | | | | |
| ssh | -p bridge-230154 sudo | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
| | systemctl cat cri-docker | | | | | |
| | --no-pager | | | | | |
| ssh | -p bridge-230154 sudo cat | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | |
| | /etc/systemd/system/cri-docker.service.d/10-cni.conf | | | | | |
| ssh | -p bridge-230154 sudo cat | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
| | /usr/lib/systemd/system/cri-docker.service | | | | | |
| ssh | -p bridge-230154 sudo | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
| | cri-dockerd --version | | | | | |
| ssh | -p bridge-230154 sudo | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
| | systemctl status containerd | | | | | |
| | --all --full --no-pager | | | | | |
| ssh | -p bridge-230154 sudo | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
| | systemctl cat containerd | | | | | |
| | --no-pager | | | | | |
| ssh | -p bridge-230154 sudo cat | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
| | /lib/systemd/system/containerd.service | | | | | |
| ssh | -p bridge-230154 sudo cat | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
| | /etc/containerd/config.toml | | | | | |
| ssh | -p bridge-230154 sudo | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
| | containerd config dump | | | | | |
| ssh | -p bridge-230154 sudo | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | |
| | systemctl status crio --all | | | | | |
| | --full --no-pager | | | | | |
| ssh | -p bridge-230154 sudo | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
| | systemctl cat crio --no-pager | | | | | |
| ssh | -p bridge-230154 sudo find | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
| | /etc/crio -type f -exec sh -c | | | | | |
| | 'echo {}; cat {}' \; | | | | | |
| ssh | -p bridge-230154 sudo crio | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
| | config | | | | | |
| delete | -p bridge-230154 | bridge-230154 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2025/01/27 11:51:47
Running on machine: ubuntu-20-agent-4
Binary: Built with gc go1.23.4 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0127 11:51:47.607978 410030 out.go:345] Setting OutFile to fd 1 ...
I0127 11:51:47.608091 410030 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:51:47.608100 410030 out.go:358] Setting ErrFile to fd 2...
I0127 11:51:47.608109 410030 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:51:47.608278 410030 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-348858/.minikube/bin
I0127 11:51:47.608812 410030 out.go:352] Setting JSON to false
I0127 11:51:47.609953 410030 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":9253,"bootTime":1737969455,"procs":306,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0127 11:51:47.610057 410030 start.go:139] virtualization: kvm guest
I0127 11:51:47.611895 410030 out.go:177] * [bridge-230154] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
I0127 11:51:47.613441 410030 notify.go:220] Checking for updates...
I0127 11:51:47.613479 410030 out.go:177] - MINIKUBE_LOCATION=20319
I0127 11:51:47.614719 410030 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0127 11:51:47.615971 410030 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20319-348858/kubeconfig
I0127 11:51:47.617111 410030 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-348858/.minikube
I0127 11:51:47.618157 410030 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0127 11:51:47.619361 410030 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0127 11:51:47.620941 410030 config.go:182] Loaded profile config "default-k8s-diff-port-259716": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 11:51:47.621061 410030 config.go:182] Loaded profile config "flannel-230154": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 11:51:47.621206 410030 config.go:182] Loaded profile config "no-preload-976043": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 11:51:47.621328 410030 driver.go:394] Setting default libvirt URI to qemu:///system
I0127 11:51:47.658431 410030 out.go:177] * Using the kvm2 driver based on user configuration
I0127 11:51:47.659436 410030 start.go:297] selected driver: kvm2
I0127 11:51:47.659452 410030 start.go:901] validating driver "kvm2" against <nil>
I0127 11:51:47.659462 410030 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0127 11:51:47.660244 410030 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 11:51:47.660346 410030 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20319-348858/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0127 11:51:47.676075 410030 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
I0127 11:51:47.676119 410030 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0127 11:51:47.676407 410030 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0127 11:51:47.676445 410030 cni.go:84] Creating CNI manager for "bridge"
I0127 11:51:47.676456 410030 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0127 11:51:47.676521 410030 start.go:340] cluster config:
{Name:bridge-230154 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-230154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0127 11:51:47.676642 410030 iso.go:125] acquiring lock: {Name:mk6cdd2a3d0bfb3682c1f0c806368944f23c4809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 11:51:47.677997 410030 out.go:177] * Starting "bridge-230154" primary control-plane node in "bridge-230154" cluster
I0127 11:51:47.678894 410030 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
I0127 11:51:47.678924 410030 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20319-348858/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
I0127 11:51:47.678936 410030 cache.go:56] Caching tarball of preloaded images
I0127 11:51:47.679024 410030 preload.go:172] Found /home/jenkins/minikube-integration/20319-348858/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0127 11:51:47.679037 410030 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on containerd
I0127 11:51:47.679160 410030 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/config.json ...
I0127 11:51:47.679185 410030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/config.json: {Name:mk2b6cd63816fa28cdffe5707c10ed7a16feb9de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 11:51:47.679337 410030 start.go:360] acquireMachinesLock for bridge-230154: {Name:mk69dba1a41baeb0794a28159a5cef220370e224 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0127 11:51:47.679375 410030 start.go:364] duration metric: took 23.748µs to acquireMachinesLock for "bridge-230154"
I0127 11:51:47.679398 410030 start.go:93] Provisioning new machine with config: &{Name:bridge-230154 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-230154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
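The provisioning config dumped above corresponds to a fresh start of the bridge profile. As a rough sketch only, the flags implied by the values shown (Memory:3072, CPUs:2, Driver:kvm2, ContainerRuntime:containerd, CNI:bridge, KubernetesVersion:v1.32.1) would look like the following; the exact invocation used by the test harness may differ:
out/minikube-linux-amd64 start -p bridge-230154 --memory=3072 --cpus=2 --driver=kvm2 --container-runtime=containerd --cni=bridge --kubernetes-version=v1.32.1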
I0127 11:51:47.679474 410030 start.go:125] createHost starting for "" (driver="kvm2")
I0127 11:51:46.323131 408290 pod_ready.go:103] pod "coredns-668d6bf9bc-cxhgb" in "kube-system" namespace has status "Ready":"False"
I0127 11:51:48.324596 408290 pod_ready.go:103] pod "coredns-668d6bf9bc-cxhgb" in "kube-system" namespace has status "Ready":"False"
I0127 11:51:47.680780 410030 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
I0127 11:51:47.680920 410030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 11:51:47.680961 410030 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 11:51:47.695019 410030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34083
I0127 11:51:47.695469 410030 main.go:141] libmachine: () Calling .GetVersion
I0127 11:51:47.696023 410030 main.go:141] libmachine: Using API Version 1
I0127 11:51:47.696045 410030 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 11:51:47.696373 410030 main.go:141] libmachine: () Calling .GetMachineName
I0127 11:51:47.696603 410030 main.go:141] libmachine: (bridge-230154) Calling .GetMachineName
I0127 11:51:47.696816 410030 main.go:141] libmachine: (bridge-230154) Calling .DriverName
I0127 11:51:47.696969 410030 start.go:159] libmachine.API.Create for "bridge-230154" (driver="kvm2")
I0127 11:51:47.696999 410030 client.go:168] LocalClient.Create starting
I0127 11:51:47.697034 410030 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca.pem
I0127 11:51:47.697071 410030 main.go:141] libmachine: Decoding PEM data...
I0127 11:51:47.697092 410030 main.go:141] libmachine: Parsing certificate...
I0127 11:51:47.697163 410030 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20319-348858/.minikube/certs/cert.pem
I0127 11:51:47.697192 410030 main.go:141] libmachine: Decoding PEM data...
I0127 11:51:47.697220 410030 main.go:141] libmachine: Parsing certificate...
I0127 11:51:47.697248 410030 main.go:141] libmachine: Running pre-create checks...
I0127 11:51:47.697262 410030 main.go:141] libmachine: (bridge-230154) Calling .PreCreateCheck
I0127 11:51:47.697637 410030 main.go:141] libmachine: (bridge-230154) Calling .GetConfigRaw
I0127 11:51:47.698098 410030 main.go:141] libmachine: Creating machine...
I0127 11:51:47.698113 410030 main.go:141] libmachine: (bridge-230154) Calling .Create
I0127 11:51:47.698255 410030 main.go:141] libmachine: (bridge-230154) creating KVM machine...
I0127 11:51:47.698270 410030 main.go:141] libmachine: (bridge-230154) creating network...
I0127 11:51:47.699710 410030 main.go:141] libmachine: (bridge-230154) DBG | found existing default KVM network
I0127 11:51:47.701093 410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:47.700951 410053 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:a9:bc:42} reservation:<nil>}
I0127 11:51:47.702050 410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:47.701955 410053 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:50:a8:75} reservation:<nil>}
I0127 11:51:47.703137 410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:47.703062 410053 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000287220}
I0127 11:51:47.703226 410030 main.go:141] libmachine: (bridge-230154) DBG | created network xml:
I0127 11:51:47.703248 410030 main.go:141] libmachine: (bridge-230154) DBG | <network>
I0127 11:51:47.703258 410030 main.go:141] libmachine: (bridge-230154) DBG | <name>mk-bridge-230154</name>
I0127 11:51:47.703285 410030 main.go:141] libmachine: (bridge-230154) DBG | <dns enable='no'/>
I0127 11:51:47.703298 410030 main.go:141] libmachine: (bridge-230154) DBG |
I0127 11:51:47.703306 410030 main.go:141] libmachine: (bridge-230154) DBG | <ip address='192.168.61.1' netmask='255.255.255.0'>
I0127 11:51:47.703321 410030 main.go:141] libmachine: (bridge-230154) DBG | <dhcp>
I0127 11:51:47.703334 410030 main.go:141] libmachine: (bridge-230154) DBG | <range start='192.168.61.2' end='192.168.61.253'/>
I0127 11:51:47.703345 410030 main.go:141] libmachine: (bridge-230154) DBG | </dhcp>
I0127 11:51:47.703361 410030 main.go:141] libmachine: (bridge-230154) DBG | </ip>
I0127 11:51:47.703384 410030 main.go:141] libmachine: (bridge-230154) DBG |
I0127 11:51:47.703400 410030 main.go:141] libmachine: (bridge-230154) DBG | </network>
I0127 11:51:47.703410 410030 main.go:141] libmachine: (bridge-230154) DBG |
I0127 11:51:47.707961 410030 main.go:141] libmachine: (bridge-230154) DBG | trying to create private KVM network mk-bridge-230154 192.168.61.0/24...
I0127 11:51:47.780019 410030 main.go:141] libmachine: (bridge-230154) DBG | private KVM network mk-bridge-230154 192.168.61.0/24 created
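The <network> XML printed above is what the kvm2 driver hands to libvirt when it creates mk-bridge-230154. A rough manual equivalent with virsh, assuming the XML is saved to a local file (the file name here is hypothetical):
virsh net-define mk-bridge-230154.xml
virsh net-start mk-bridge-230154
virsh net-list --all   # should now list mk-bridge-230154 as active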
I0127 11:51:47.780050 410030 main.go:141] libmachine: (bridge-230154) setting up store path in /home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154 ...
I0127 11:51:47.780064 410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:47.779969 410053 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20319-348858/.minikube
I0127 11:51:47.780075 410030 main.go:141] libmachine: (bridge-230154) building disk image from file:///home/jenkins/minikube-integration/20319-348858/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
I0127 11:51:47.780095 410030 main.go:141] libmachine: (bridge-230154) Downloading /home/jenkins/minikube-integration/20319-348858/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20319-348858/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
I0127 11:51:48.077713 410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:48.077516 410053 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154/id_rsa...
I0127 11:51:48.209215 410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:48.209093 410053 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154/bridge-230154.rawdisk...
I0127 11:51:48.209256 410030 main.go:141] libmachine: (bridge-230154) DBG | Writing magic tar header
I0127 11:51:48.209272 410030 main.go:141] libmachine: (bridge-230154) DBG | Writing SSH key tar header
I0127 11:51:48.209286 410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:48.209206 410053 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154 ...
I0127 11:51:48.209303 410030 main.go:141] libmachine: (bridge-230154) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154
I0127 11:51:48.209343 410030 main.go:141] libmachine: (bridge-230154) setting executable bit set on /home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154 (perms=drwx------)
I0127 11:51:48.209355 410030 main.go:141] libmachine: (bridge-230154) setting executable bit set on /home/jenkins/minikube-integration/20319-348858/.minikube/machines (perms=drwxr-xr-x)
I0127 11:51:48.209368 410030 main.go:141] libmachine: (bridge-230154) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20319-348858/.minikube/machines
I0127 11:51:48.209389 410030 main.go:141] libmachine: (bridge-230154) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20319-348858/.minikube
I0127 11:51:48.209411 410030 main.go:141] libmachine: (bridge-230154) setting executable bit set on /home/jenkins/minikube-integration/20319-348858/.minikube (perms=drwxr-xr-x)
I0127 11:51:48.209424 410030 main.go:141] libmachine: (bridge-230154) setting executable bit set on /home/jenkins/minikube-integration/20319-348858 (perms=drwxrwxr-x)
I0127 11:51:48.209432 410030 main.go:141] libmachine: (bridge-230154) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I0127 11:51:48.209444 410030 main.go:141] libmachine: (bridge-230154) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I0127 11:51:48.209455 410030 main.go:141] libmachine: (bridge-230154) creating domain...
I0127 11:51:48.209468 410030 main.go:141] libmachine: (bridge-230154) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20319-348858
I0127 11:51:48.209481 410030 main.go:141] libmachine: (bridge-230154) DBG | checking permissions on dir: /home/jenkins/minikube-integration
I0127 11:51:48.209495 410030 main.go:141] libmachine: (bridge-230154) DBG | checking permissions on dir: /home/jenkins
I0127 11:51:48.209503 410030 main.go:141] libmachine: (bridge-230154) DBG | checking permissions on dir: /home
I0127 11:51:48.209510 410030 main.go:141] libmachine: (bridge-230154) DBG | skipping /home - not owner
I0127 11:51:48.210458 410030 main.go:141] libmachine: (bridge-230154) define libvirt domain using xml:
I0127 11:51:48.210486 410030 main.go:141] libmachine: (bridge-230154) <domain type='kvm'>
I0127 11:51:48.210494 410030 main.go:141] libmachine: (bridge-230154) <name>bridge-230154</name>
I0127 11:51:48.210500 410030 main.go:141] libmachine: (bridge-230154) <memory unit='MiB'>3072</memory>
I0127 11:51:48.210504 410030 main.go:141] libmachine: (bridge-230154) <vcpu>2</vcpu>
I0127 11:51:48.210509 410030 main.go:141] libmachine: (bridge-230154) <features>
I0127 11:51:48.210519 410030 main.go:141] libmachine: (bridge-230154) <acpi/>
I0127 11:51:48.210526 410030 main.go:141] libmachine: (bridge-230154) <apic/>
I0127 11:51:48.210531 410030 main.go:141] libmachine: (bridge-230154) <pae/>
I0127 11:51:48.210535 410030 main.go:141] libmachine: (bridge-230154)
I0127 11:51:48.210542 410030 main.go:141] libmachine: (bridge-230154) </features>
I0127 11:51:48.210549 410030 main.go:141] libmachine: (bridge-230154) <cpu mode='host-passthrough'>
I0127 11:51:48.210554 410030 main.go:141] libmachine: (bridge-230154)
I0127 11:51:48.210560 410030 main.go:141] libmachine: (bridge-230154) </cpu>
I0127 11:51:48.210573 410030 main.go:141] libmachine: (bridge-230154) <os>
I0127 11:51:48.210585 410030 main.go:141] libmachine: (bridge-230154) <type>hvm</type>
I0127 11:51:48.210590 410030 main.go:141] libmachine: (bridge-230154) <boot dev='cdrom'/>
I0127 11:51:48.210595 410030 main.go:141] libmachine: (bridge-230154) <boot dev='hd'/>
I0127 11:51:48.210601 410030 main.go:141] libmachine: (bridge-230154) <bootmenu enable='no'/>
I0127 11:51:48.210607 410030 main.go:141] libmachine: (bridge-230154) </os>
I0127 11:51:48.210612 410030 main.go:141] libmachine: (bridge-230154) <devices>
I0127 11:51:48.210617 410030 main.go:141] libmachine: (bridge-230154) <disk type='file' device='cdrom'>
I0127 11:51:48.210627 410030 main.go:141] libmachine: (bridge-230154) <source file='/home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154/boot2docker.iso'/>
I0127 11:51:48.210631 410030 main.go:141] libmachine: (bridge-230154) <target dev='hdc' bus='scsi'/>
I0127 11:51:48.210639 410030 main.go:141] libmachine: (bridge-230154) <readonly/>
I0127 11:51:48.210643 410030 main.go:141] libmachine: (bridge-230154) </disk>
I0127 11:51:48.210666 410030 main.go:141] libmachine: (bridge-230154) <disk type='file' device='disk'>
I0127 11:51:48.210688 410030 main.go:141] libmachine: (bridge-230154) <driver name='qemu' type='raw' cache='default' io='threads' />
I0127 11:51:48.210711 410030 main.go:141] libmachine: (bridge-230154) <source file='/home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154/bridge-230154.rawdisk'/>
I0127 11:51:48.210732 410030 main.go:141] libmachine: (bridge-230154) <target dev='hda' bus='virtio'/>
I0127 11:51:48.210743 410030 main.go:141] libmachine: (bridge-230154) </disk>
I0127 11:51:48.210753 410030 main.go:141] libmachine: (bridge-230154) <interface type='network'>
I0127 11:51:48.210760 410030 main.go:141] libmachine: (bridge-230154) <source network='mk-bridge-230154'/>
I0127 11:51:48.210767 410030 main.go:141] libmachine: (bridge-230154) <model type='virtio'/>
I0127 11:51:48.210780 410030 main.go:141] libmachine: (bridge-230154) </interface>
I0127 11:51:48.210787 410030 main.go:141] libmachine: (bridge-230154) <interface type='network'>
I0127 11:51:48.210792 410030 main.go:141] libmachine: (bridge-230154) <source network='default'/>
I0127 11:51:48.210798 410030 main.go:141] libmachine: (bridge-230154) <model type='virtio'/>
I0127 11:51:48.210808 410030 main.go:141] libmachine: (bridge-230154) </interface>
I0127 11:51:48.210825 410030 main.go:141] libmachine: (bridge-230154) <serial type='pty'>
I0127 11:51:48.210834 410030 main.go:141] libmachine: (bridge-230154) <target port='0'/>
I0127 11:51:48.210838 410030 main.go:141] libmachine: (bridge-230154) </serial>
I0127 11:51:48.210847 410030 main.go:141] libmachine: (bridge-230154) <console type='pty'>
I0127 11:51:48.210858 410030 main.go:141] libmachine: (bridge-230154) <target type='serial' port='0'/>
I0127 11:51:48.210867 410030 main.go:141] libmachine: (bridge-230154) </console>
I0127 11:51:48.210878 410030 main.go:141] libmachine: (bridge-230154) <rng model='virtio'>
I0127 11:51:48.210890 410030 main.go:141] libmachine: (bridge-230154) <backend model='random'>/dev/random</backend>
I0127 11:51:48.210898 410030 main.go:141] libmachine: (bridge-230154) </rng>
I0127 11:51:48.210903 410030 main.go:141] libmachine: (bridge-230154)
I0127 11:51:48.210909 410030 main.go:141] libmachine: (bridge-230154)
I0127 11:51:48.210913 410030 main.go:141] libmachine: (bridge-230154) </devices>
I0127 11:51:48.210918 410030 main.go:141] libmachine: (bridge-230154) </domain>
I0127 11:51:48.210926 410030 main.go:141] libmachine: (bridge-230154)
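The domain XML above is defined and booted through the libvirt API ("starting domain..." below). A rough virsh equivalent, assuming the XML is saved to a local file (file name hypothetical); domifaddr is what the "waiting for IP" retry loop further down is effectively polling:
virsh define bridge-230154.xml
virsh start bridge-230154
virsh domifaddr bridge-230154   # prints the DHCP-assigned IP once the guest is up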
I0127 11:51:48.214625 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:37:b6:92 in network default
I0127 11:51:48.215133 410030 main.go:141] libmachine: (bridge-230154) starting domain...
I0127 11:51:48.215157 410030 main.go:141] libmachine: (bridge-230154) ensuring networks are active...
I0127 11:51:48.215168 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:51:48.215860 410030 main.go:141] libmachine: (bridge-230154) Ensuring network default is active
I0127 11:51:48.216193 410030 main.go:141] libmachine: (bridge-230154) Ensuring network mk-bridge-230154 is active
I0127 11:51:48.216783 410030 main.go:141] libmachine: (bridge-230154) getting domain XML...
I0127 11:51:48.217458 410030 main.go:141] libmachine: (bridge-230154) creating domain...
I0127 11:51:48.569774 410030 main.go:141] libmachine: (bridge-230154) waiting for IP...
I0127 11:51:48.570778 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:51:48.571317 410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find current IP address of domain bridge-230154 in network mk-bridge-230154
I0127 11:51:48.571362 410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:48.571309 410053 retry.go:31] will retry after 222.051521ms: waiting for domain to come up
I0127 11:51:48.794921 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:51:48.795488 410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find current IP address of domain bridge-230154 in network mk-bridge-230154
I0127 11:51:48.795532 410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:48.795451 410053 retry.go:31] will retry after 300.550406ms: waiting for domain to come up
I0127 11:51:49.098085 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:51:49.098673 410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find current IP address of domain bridge-230154 in network mk-bridge-230154
I0127 11:51:49.098705 410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:49.098646 410053 retry.go:31] will retry after 351.204659ms: waiting for domain to come up
I0127 11:51:49.450989 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:51:49.451523 410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find current IP address of domain bridge-230154 in network mk-bridge-230154
I0127 11:51:49.451547 410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:49.451503 410053 retry.go:31] will retry after 551.090722ms: waiting for domain to come up
I0127 11:51:50.003672 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:51:50.004175 410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find current IP address of domain bridge-230154 in network mk-bridge-230154
I0127 11:51:50.004220 410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:50.004153 410053 retry.go:31] will retry after 550.280324ms: waiting for domain to come up
I0127 11:51:50.555950 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:51:50.556457 410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find current IP address of domain bridge-230154 in network mk-bridge-230154
I0127 11:51:50.556489 410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:50.556430 410053 retry.go:31] will retry after 583.250306ms: waiting for domain to come up
I0127 11:51:51.140978 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:51:51.141558 410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find current IP address of domain bridge-230154 in network mk-bridge-230154
I0127 11:51:51.141627 410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:51.141533 410053 retry.go:31] will retry after 1.176790151s: waiting for domain to come up
I0127 11:51:52.320049 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:51:52.320729 410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find current IP address of domain bridge-230154 in network mk-bridge-230154
I0127 11:51:52.320797 410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:52.320689 410053 retry.go:31] will retry after 1.176590374s: waiting for domain to come up
I0127 11:51:50.326882 408290 pod_ready.go:103] pod "coredns-668d6bf9bc-cxhgb" in "kube-system" namespace has status "Ready":"False"
I0127 11:51:52.823007 408290 pod_ready.go:103] pod "coredns-668d6bf9bc-cxhgb" in "kube-system" namespace has status "Ready":"False"
I0127 11:51:53.498996 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:51:53.499617 410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find current IP address of domain bridge-230154 in network mk-bridge-230154
I0127 11:51:53.499644 410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:53.499590 410053 retry.go:31] will retry after 1.435449708s: waiting for domain to come up
I0127 11:51:54.937088 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:51:54.937656 410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find current IP address of domain bridge-230154 in network mk-bridge-230154
I0127 11:51:54.937687 410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:54.937628 410053 retry.go:31] will retry after 1.670320015s: waiting for domain to come up
I0127 11:51:56.609490 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:51:56.610076 410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find current IP address of domain bridge-230154 in network mk-bridge-230154
I0127 11:51:56.610106 410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:56.610030 410053 retry.go:31] will retry after 2.430005713s: waiting for domain to come up
I0127 11:51:55.322705 408290 pod_ready.go:103] pod "coredns-668d6bf9bc-cxhgb" in "kube-system" namespace has status "Ready":"False"
I0127 11:51:57.331001 408290 pod_ready.go:103] pod "coredns-668d6bf9bc-cxhgb" in "kube-system" namespace has status "Ready":"False"
I0127 11:51:59.822867 408290 pod_ready.go:93] pod "coredns-668d6bf9bc-cxhgb" in "kube-system" namespace has status "Ready":"True"
I0127 11:51:59.822893 408290 pod_ready.go:82] duration metric: took 18.006590764s for pod "coredns-668d6bf9bc-cxhgb" in "kube-system" namespace to be "Ready" ...
I0127 11:51:59.822903 408290 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-x26ng" in "kube-system" namespace to be "Ready" ...
I0127 11:51:59.827408 408290 pod_ready.go:93] pod "coredns-668d6bf9bc-x26ng" in "kube-system" namespace has status "Ready":"True"
I0127 11:51:59.827431 408290 pod_ready.go:82] duration metric: took 4.521822ms for pod "coredns-668d6bf9bc-x26ng" in "kube-system" namespace to be "Ready" ...
I0127 11:51:59.827439 408290 pod_ready.go:79] waiting up to 15m0s for pod "etcd-flannel-230154" in "kube-system" namespace to be "Ready" ...
I0127 11:51:59.831731 408290 pod_ready.go:93] pod "etcd-flannel-230154" in "kube-system" namespace has status "Ready":"True"
I0127 11:51:59.831754 408290 pod_ready.go:82] duration metric: took 4.307302ms for pod "etcd-flannel-230154" in "kube-system" namespace to be "Ready" ...
I0127 11:51:59.831766 408290 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-flannel-230154" in "kube-system" namespace to be "Ready" ...
I0127 11:51:59.836455 408290 pod_ready.go:93] pod "kube-apiserver-flannel-230154" in "kube-system" namespace has status "Ready":"True"
I0127 11:51:59.836476 408290 pod_ready.go:82] duration metric: took 4.701033ms for pod "kube-apiserver-flannel-230154" in "kube-system" namespace to be "Ready" ...
I0127 11:51:59.836485 408290 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-flannel-230154" in "kube-system" namespace to be "Ready" ...
I0127 11:51:59.841564 408290 pod_ready.go:93] pod "kube-controller-manager-flannel-230154" in "kube-system" namespace has status "Ready":"True"
I0127 11:51:59.841607 408290 pod_ready.go:82] duration metric: took 5.114623ms for pod "kube-controller-manager-flannel-230154" in "kube-system" namespace to be "Ready" ...
I0127 11:51:59.841619 408290 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-fwvhb" in "kube-system" namespace to be "Ready" ...
I0127 11:52:00.221093 408290 pod_ready.go:93] pod "kube-proxy-fwvhb" in "kube-system" namespace has status "Ready":"True"
I0127 11:52:00.221117 408290 pod_ready.go:82] duration metric: took 379.489464ms for pod "kube-proxy-fwvhb" in "kube-system" namespace to be "Ready" ...
I0127 11:52:00.221127 408290 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-flannel-230154" in "kube-system" namespace to be "Ready" ...
I0127 11:51:59.041589 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:51:59.042126 410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find current IP address of domain bridge-230154 in network mk-bridge-230154
I0127 11:51:59.042157 410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:51:59.042094 410053 retry.go:31] will retry after 2.320988246s: waiting for domain to come up
I0127 11:52:01.364475 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:52:01.365092 410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find current IP address of domain bridge-230154 in network mk-bridge-230154
I0127 11:52:01.365148 410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:52:01.365068 410053 retry.go:31] will retry after 4.110080679s: waiting for domain to come up
I0127 11:52:00.620378 408290 pod_ready.go:93] pod "kube-scheduler-flannel-230154" in "kube-system" namespace has status "Ready":"True"
I0127 11:52:00.620412 408290 pod_ready.go:82] duration metric: took 399.276857ms for pod "kube-scheduler-flannel-230154" in "kube-system" namespace to be "Ready" ...
I0127 11:52:00.620423 408290 pod_ready.go:39] duration metric: took 18.811740813s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0127 11:52:00.620442 408290 api_server.go:52] waiting for apiserver process to appear ...
I0127 11:52:00.620509 408290 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 11:52:00.636203 408290 api_server.go:72] duration metric: took 26.524075024s to wait for apiserver process to appear ...
I0127 11:52:00.636225 408290 api_server.go:88] waiting for apiserver healthz status ...
I0127 11:52:00.636241 408290 api_server.go:253] Checking apiserver healthz at https://192.168.50.249:8443/healthz ...
I0127 11:52:00.640488 408290 api_server.go:279] https://192.168.50.249:8443/healthz returned 200:
ok
I0127 11:52:00.641304 408290 api_server.go:141] control plane version: v1.32.1
I0127 11:52:00.641328 408290 api_server.go:131] duration metric: took 5.095135ms to wait for apiserver health ...
I0127 11:52:00.641338 408290 system_pods.go:43] waiting for kube-system pods to appear ...
I0127 11:52:00.823404 408290 system_pods.go:59] 8 kube-system pods found
I0127 11:52:00.823440 408290 system_pods.go:61] "coredns-668d6bf9bc-cxhgb" [1b5c455f-cd3e-4049-ad66-0b5ac83e0cfc] Running
I0127 11:52:00.823447 408290 system_pods.go:61] "coredns-668d6bf9bc-x26ng" [faddde6c-95bb-43ed-8312-9cb6d1381b76] Running
I0127 11:52:00.823451 408290 system_pods.go:61] "etcd-flannel-230154" [04cfa9e0-f3d2-4147-a565-73d9a56314be] Running
I0127 11:52:00.823457 408290 system_pods.go:61] "kube-apiserver-flannel-230154" [b7e45b11-41e6-4471-b69f-ebcfa9fe0b11] Running
I0127 11:52:00.823460 408290 system_pods.go:61] "kube-controller-manager-flannel-230154" [db9c61ca-4433-474f-b896-bf75b5586aa8] Running
I0127 11:52:00.823464 408290 system_pods.go:61] "kube-proxy-fwvhb" [c9df58ca-9fda-4b0d-83d3-b0d5771a2b8d] Running
I0127 11:52:00.823468 408290 system_pods.go:61] "kube-scheduler-flannel-230154" [ef963048-9064-4a1b-8c7c-0b560ac1073e] Running
I0127 11:52:00.823473 408290 system_pods.go:61] "storage-provisioner" [1d37e577-26fc-4920-addd-4c2b9ea83d4f] Running
I0127 11:52:00.823480 408290 system_pods.go:74] duration metric: took 182.135829ms to wait for pod list to return data ...
I0127 11:52:00.823492 408290 default_sa.go:34] waiting for default service account to be created ...
I0127 11:52:01.019648 408290 default_sa.go:45] found service account: "default"
I0127 11:52:01.019672 408290 default_sa.go:55] duration metric: took 196.17422ms for default service account to be created ...
I0127 11:52:01.019680 408290 system_pods.go:137] waiting for k8s-apps to be running ...
I0127 11:52:01.222213 408290 system_pods.go:87] 8 kube-system pods found
I0127 11:52:05.478491 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:52:05.479050 410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find current IP address of domain bridge-230154 in network mk-bridge-230154
I0127 11:52:05.479075 410030 main.go:141] libmachine: (bridge-230154) DBG | I0127 11:52:05.479016 410053 retry.go:31] will retry after 3.983085371s: waiting for domain to come up
I0127 11:52:09.463887 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:52:09.464547 410030 main.go:141] libmachine: (bridge-230154) found domain IP: 192.168.61.114
I0127 11:52:09.464572 410030 main.go:141] libmachine: (bridge-230154) reserving static IP address...
I0127 11:52:09.464581 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has current primary IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:52:09.464980 410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find host DHCP lease matching {name: "bridge-230154", mac: "52:54:00:79:3a:f7", ip: "192.168.61.114"} in network mk-bridge-230154
I0127 11:52:09.541183 410030 main.go:141] libmachine: (bridge-230154) reserved static IP address 192.168.61.114 for domain bridge-230154
I0127 11:52:09.541215 410030 main.go:141] libmachine: (bridge-230154) waiting for SSH...
I0127 11:52:09.541226 410030 main.go:141] libmachine: (bridge-230154) DBG | Getting to WaitForSSH function...
I0127 11:52:09.544735 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:52:09.545125 410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154
I0127 11:52:09.545156 410030 main.go:141] libmachine: (bridge-230154) DBG | unable to find defined IP address of network mk-bridge-230154 interface with MAC address 52:54:00:79:3a:f7
I0127 11:52:09.545335 410030 main.go:141] libmachine: (bridge-230154) DBG | Using SSH client type: external
I0127 11:52:09.545351 410030 main.go:141] libmachine: (bridge-230154) DBG | Using SSH private key: /home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154/id_rsa (-rw-------)
I0127 11:52:09.545396 410030 main.go:141] libmachine: (bridge-230154) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154/id_rsa -p 22] /usr/bin/ssh <nil>}
I0127 11:52:09.545409 410030 main.go:141] libmachine: (bridge-230154) DBG | About to run SSH command:
I0127 11:52:09.545431 410030 main.go:141] libmachine: (bridge-230154) DBG | exit 0
I0127 11:52:09.549092 410030 main.go:141] libmachine: (bridge-230154) DBG | SSH cmd err, output: exit status 255:
I0127 11:52:09.549118 410030 main.go:141] libmachine: (bridge-230154) DBG | Error getting ssh command 'exit 0' : ssh command error:
I0127 11:52:09.549128 410030 main.go:141] libmachine: (bridge-230154) DBG | command : exit 0
I0127 11:52:09.549141 410030 main.go:141] libmachine: (bridge-230154) DBG | err : exit status 255
I0127 11:52:09.549152 410030 main.go:141] libmachine: (bridge-230154) DBG | output :
I0127 11:52:12.550382 410030 main.go:141] libmachine: (bridge-230154) DBG | Getting to WaitForSSH function...
I0127 11:52:12.552791 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:52:12.553322 410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
I0127 11:52:12.553351 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:52:12.553432 410030 main.go:141] libmachine: (bridge-230154) DBG | Using SSH client type: external
I0127 11:52:12.553481 410030 main.go:141] libmachine: (bridge-230154) DBG | Using SSH private key: /home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154/id_rsa (-rw-------)
I0127 11:52:12.553525 410030 main.go:141] libmachine: (bridge-230154) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.114 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154/id_rsa -p 22] /usr/bin/ssh <nil>}
I0127 11:52:12.553539 410030 main.go:141] libmachine: (bridge-230154) DBG | About to run SSH command:
I0127 11:52:12.553563 410030 main.go:141] libmachine: (bridge-230154) DBG | exit 0
I0127 11:52:12.681782 410030 main.go:141] libmachine: (bridge-230154) DBG | SSH cmd err, output: <nil>:
I0127 11:52:12.682047 410030 main.go:141] libmachine: (bridge-230154) KVM machine creation complete
I0127 11:52:12.682445 410030 main.go:141] libmachine: (bridge-230154) Calling .GetConfigRaw
I0127 11:52:12.682967 410030 main.go:141] libmachine: (bridge-230154) Calling .DriverName
I0127 11:52:12.683184 410030 main.go:141] libmachine: (bridge-230154) Calling .DriverName
I0127 11:52:12.683394 410030 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
I0127 11:52:12.683415 410030 main.go:141] libmachine: (bridge-230154) Calling .GetState
I0127 11:52:12.684785 410030 main.go:141] libmachine: Detecting operating system of created instance...
I0127 11:52:12.684823 410030 main.go:141] libmachine: Waiting for SSH to be available...
I0127 11:52:12.684832 410030 main.go:141] libmachine: Getting to WaitForSSH function...
I0127 11:52:12.684844 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHHostname
I0127 11:52:12.687551 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:52:12.687960 410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
I0127 11:52:12.687997 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:52:12.688103 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHPort
I0127 11:52:12.688306 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
I0127 11:52:12.688464 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
I0127 11:52:12.688609 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHUsername
I0127 11:52:12.688818 410030 main.go:141] libmachine: Using SSH client type: native
I0127 11:52:12.689070 410030 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.61.114 22 <nil> <nil>}
I0127 11:52:12.689084 410030 main.go:141] libmachine: About to run SSH command:
exit 0
I0127 11:52:12.800827 410030 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0127 11:52:12.800849 410030 main.go:141] libmachine: Detecting the provisioner...
I0127 11:52:12.800859 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHHostname
I0127 11:52:12.803312 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:52:12.803747 410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
I0127 11:52:12.803778 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:52:12.803968 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHPort
I0127 11:52:12.804181 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
I0127 11:52:12.804339 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
I0127 11:52:12.804499 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHUsername
I0127 11:52:12.804712 410030 main.go:141] libmachine: Using SSH client type: native
I0127 11:52:12.804930 410030 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.61.114 22 <nil> <nil>}
I0127 11:52:12.804944 410030 main.go:141] libmachine: About to run SSH command:
cat /etc/os-release
I0127 11:52:12.922388 410030 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"
I0127 11:52:12.922499 410030 main.go:141] libmachine: found compatible host: buildroot
I0127 11:52:12.922517 410030 main.go:141] libmachine: Provisioning with buildroot...
I0127 11:52:12.922528 410030 main.go:141] libmachine: (bridge-230154) Calling .GetMachineName
I0127 11:52:12.922767 410030 buildroot.go:166] provisioning hostname "bridge-230154"
I0127 11:52:12.922793 410030 main.go:141] libmachine: (bridge-230154) Calling .GetMachineName
I0127 11:52:12.922988 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHHostname
I0127 11:52:12.925557 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:52:12.925920 410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
I0127 11:52:12.925951 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:52:12.926089 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHPort
I0127 11:52:12.926266 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
I0127 11:52:12.926402 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
I0127 11:52:12.926527 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHUsername
I0127 11:52:12.926642 410030 main.go:141] libmachine: Using SSH client type: native
I0127 11:52:12.926867 410030 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.61.114 22 <nil> <nil>}
I0127 11:52:12.926884 410030 main.go:141] libmachine: About to run SSH command:
sudo hostname bridge-230154 && echo "bridge-230154" | sudo tee /etc/hostname
I0127 11:52:13.055349 410030 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-230154
I0127 11:52:13.055376 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHHostname
I0127 11:52:13.057804 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:52:13.058160 410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
I0127 11:52:13.058184 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:52:13.058377 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHPort
I0127 11:52:13.058583 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
I0127 11:52:13.058746 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
I0127 11:52:13.058898 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHUsername
I0127 11:52:13.059086 410030 main.go:141] libmachine: Using SSH client type: native
I0127 11:52:13.059305 410030 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.61.114 22 <nil> <nil>}
I0127 11:52:13.059340 410030 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sbridge-230154' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-230154/g' /etc/hosts;
else
echo '127.0.1.1 bridge-230154' | sudo tee -a /etc/hosts;
fi
fi
I0127 11:52:13.182533 410030 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0127 11:52:13.182574 410030 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20319-348858/.minikube CaCertPath:/home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20319-348858/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20319-348858/.minikube}
I0127 11:52:13.182607 410030 buildroot.go:174] setting up certificates
I0127 11:52:13.182618 410030 provision.go:84] configureAuth start
I0127 11:52:13.182631 410030 main.go:141] libmachine: (bridge-230154) Calling .GetMachineName
I0127 11:52:13.182846 410030 main.go:141] libmachine: (bridge-230154) Calling .GetIP
I0127 11:52:13.185388 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:52:13.185727 410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
I0127 11:52:13.185753 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:52:13.185888 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHHostname
I0127 11:52:13.188052 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:52:13.188418 410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
I0127 11:52:13.188451 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:52:13.188586 410030 provision.go:143] copyHostCerts
I0127 11:52:13.188644 410030 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-348858/.minikube/ca.pem, removing ...
I0127 11:52:13.188668 410030 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-348858/.minikube/ca.pem
I0127 11:52:13.188770 410030 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20319-348858/.minikube/ca.pem (1082 bytes)
I0127 11:52:13.188901 410030 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-348858/.minikube/cert.pem, removing ...
I0127 11:52:13.188912 410030 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-348858/.minikube/cert.pem
I0127 11:52:13.188951 410030 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20319-348858/.minikube/cert.pem (1123 bytes)
I0127 11:52:13.189068 410030 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-348858/.minikube/key.pem, removing ...
I0127 11:52:13.189080 410030 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-348858/.minikube/key.pem
I0127 11:52:13.189133 410030 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20319-348858/.minikube/key.pem (1679 bytes)
I0127 11:52:13.189206 410030 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20319-348858/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca-key.pem org=jenkins.bridge-230154 san=[127.0.0.1 192.168.61.114 bridge-230154 localhost minikube]
I0127 11:52:13.437569 410030 provision.go:177] copyRemoteCerts
I0127 11:52:13.437657 410030 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0127 11:52:13.437681 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHHostname
I0127 11:52:13.440100 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:52:13.440463 410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
I0127 11:52:13.440498 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:52:13.440655 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHPort
I0127 11:52:13.440869 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
I0127 11:52:13.441020 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHUsername
I0127 11:52:13.441174 410030 sshutil.go:53] new ssh client: &{IP:192.168.61.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154/id_rsa Username:docker}
I0127 11:52:13.527720 410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0127 11:52:13.553220 410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0127 11:52:13.577811 410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0127 11:52:13.602562 410030 provision.go:87] duration metric: took 419.926949ms to configureAuth
I0127 11:52:13.602597 410030 buildroot.go:189] setting minikube options for container-runtime
I0127 11:52:13.602829 410030 config.go:182] Loaded profile config "bridge-230154": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 11:52:13.602905 410030 main.go:141] libmachine: Checking connection to Docker...
I0127 11:52:13.602923 410030 main.go:141] libmachine: (bridge-230154) Calling .GetURL
I0127 11:52:13.604054 410030 main.go:141] libmachine: (bridge-230154) DBG | using libvirt version 6000000
I0127 11:52:13.606405 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:52:13.606734 410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
I0127 11:52:13.606760 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:52:13.606925 410030 main.go:141] libmachine: Docker is up and running!
I0127 11:52:13.606940 410030 main.go:141] libmachine: Reticulating splines...
I0127 11:52:13.606947 410030 client.go:171] duration metric: took 25.909938238s to LocalClient.Create
I0127 11:52:13.606968 410030 start.go:167] duration metric: took 25.909999682s to libmachine.API.Create "bridge-230154"
I0127 11:52:13.606981 410030 start.go:293] postStartSetup for "bridge-230154" (driver="kvm2")
I0127 11:52:13.606995 410030 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0127 11:52:13.607018 410030 main.go:141] libmachine: (bridge-230154) Calling .DriverName
I0127 11:52:13.607273 410030 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0127 11:52:13.607302 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHHostname
I0127 11:52:13.609569 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:52:13.609936 410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
I0127 11:52:13.609966 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:52:13.610158 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHPort
I0127 11:52:13.610355 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
I0127 11:52:13.610531 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHUsername
I0127 11:52:13.610640 410030 sshutil.go:53] new ssh client: &{IP:192.168.61.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154/id_rsa Username:docker}
I0127 11:52:13.697284 410030 ssh_runner.go:195] Run: cat /etc/os-release
I0127 11:52:13.702294 410030 info.go:137] Remote host: Buildroot 2023.02.9
I0127 11:52:13.702320 410030 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-348858/.minikube/addons for local assets ...
I0127 11:52:13.702383 410030 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-348858/.minikube/files for local assets ...
I0127 11:52:13.702495 410030 filesync.go:149] local asset: /home/jenkins/minikube-integration/20319-348858/.minikube/files/etc/ssl/certs/3562042.pem -> 3562042.pem in /etc/ssl/certs
I0127 11:52:13.702595 410030 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0127 11:52:13.713272 410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/files/etc/ssl/certs/3562042.pem --> /etc/ssl/certs/3562042.pem (1708 bytes)
I0127 11:52:13.737044 410030 start.go:296] duration metric: took 130.0485ms for postStartSetup
I0127 11:52:13.737087 410030 main.go:141] libmachine: (bridge-230154) Calling .GetConfigRaw
I0127 11:52:13.737687 410030 main.go:141] libmachine: (bridge-230154) Calling .GetIP
I0127 11:52:13.740135 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:52:13.740568 410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
I0127 11:52:13.740596 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:52:13.740857 410030 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/config.json ...
I0127 11:52:13.741063 410030 start.go:128] duration metric: took 26.061575251s to createHost
I0127 11:52:13.741091 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHHostname
I0127 11:52:13.743565 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:52:13.743863 410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
I0127 11:52:13.743892 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:52:13.744009 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHPort
I0127 11:52:13.744178 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
I0127 11:52:13.744308 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
I0127 11:52:13.744464 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHUsername
I0127 11:52:13.744612 410030 main.go:141] libmachine: Using SSH client type: native
I0127 11:52:13.744775 410030 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.61.114 22 <nil> <nil>}
I0127 11:52:13.744786 410030 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0127 11:52:13.858058 410030 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737978733.835977728
I0127 11:52:13.858081 410030 fix.go:216] guest clock: 1737978733.835977728
I0127 11:52:13.858090 410030 fix.go:229] Guest: 2025-01-27 11:52:13.835977728 +0000 UTC Remote: 2025-01-27 11:52:13.74107788 +0000 UTC m=+26.172194908 (delta=94.899848ms)
I0127 11:52:13.858112 410030 fix.go:200] guest clock delta is within tolerance: 94.899848ms
I0127 11:52:13.858119 410030 start.go:83] releasing machines lock for "bridge-230154", held for 26.178731868s
I0127 11:52:13.858143 410030 main.go:141] libmachine: (bridge-230154) Calling .DriverName
I0127 11:52:13.858357 410030 main.go:141] libmachine: (bridge-230154) Calling .GetIP
I0127 11:52:13.860564 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:52:13.860972 410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
I0127 11:52:13.861005 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:52:13.861149 410030 main.go:141] libmachine: (bridge-230154) Calling .DriverName
I0127 11:52:13.861700 410030 main.go:141] libmachine: (bridge-230154) Calling .DriverName
I0127 11:52:13.861894 410030 main.go:141] libmachine: (bridge-230154) Calling .DriverName
I0127 11:52:13.861978 410030 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0127 11:52:13.862037 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHHostname
I0127 11:52:13.862113 410030 ssh_runner.go:195] Run: cat /version.json
I0127 11:52:13.862141 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHHostname
I0127 11:52:13.864536 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:52:13.864853 410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
I0127 11:52:13.864880 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:52:13.864898 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:52:13.865008 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHPort
I0127 11:52:13.865191 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
I0127 11:52:13.865337 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHUsername
I0127 11:52:13.865370 410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
I0127 11:52:13.865394 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:52:13.865518 410030 sshutil.go:53] new ssh client: &{IP:192.168.61.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154/id_rsa Username:docker}
I0127 11:52:13.865598 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHPort
I0127 11:52:13.865728 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
I0127 11:52:13.865888 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHUsername
I0127 11:52:13.866057 410030 sshutil.go:53] new ssh client: &{IP:192.168.61.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154/id_rsa Username:docker}
I0127 11:52:13.965402 410030 ssh_runner.go:195] Run: systemctl --version
I0127 11:52:13.971806 410030 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0127 11:52:13.977779 410030 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0127 11:52:13.977840 410030 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0127 11:52:13.994427 410030 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0127 11:52:13.994450 410030 start.go:495] detecting cgroup driver to use...
I0127 11:52:13.994511 410030 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0127 11:52:14.024064 410030 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0127 11:52:14.037402 410030 docker.go:217] disabling cri-docker service (if available) ...
I0127 11:52:14.037442 410030 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0127 11:52:14.051360 410030 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0127 11:52:14.064833 410030 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0127 11:52:14.189820 410030 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0127 11:52:14.353457 410030 docker.go:233] disabling docker service ...
I0127 11:52:14.353523 410030 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0127 11:52:14.368733 410030 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0127 11:52:14.383491 410030 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0127 11:52:14.519252 410030 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0127 11:52:14.653505 410030 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0127 11:52:14.667113 410030 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0127 11:52:14.686409 410030 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0127 11:52:14.698227 410030 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0127 11:52:14.708812 410030 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0127 11:52:14.708860 410030 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0127 11:52:14.719554 410030 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0127 11:52:14.729838 410030 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0127 11:52:14.740183 410030 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0127 11:52:14.750883 410030 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0127 11:52:14.761217 410030 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0127 11:52:14.771423 410030 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0127 11:52:14.781773 410030 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0127 11:52:14.793278 410030 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0127 11:52:14.804439 410030 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0127 11:52:14.804483 410030 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0127 11:52:14.818950 410030 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0127 11:52:14.829832 410030 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 11:52:14.959488 410030 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0127 11:52:14.989337 410030 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0127 11:52:14.989418 410030 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0127 11:52:14.994828 410030 retry.go:31] will retry after 1.345888224s: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
I0127 11:52:16.341324 410030 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0127 11:52:16.347230 410030 start.go:563] Will wait 60s for crictl version
I0127 11:52:16.347291 410030 ssh_runner.go:195] Run: which crictl
I0127 11:52:16.351193 410030 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0127 11:52:16.395528 410030 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.7.23
RuntimeApiVersion: v1
I0127 11:52:16.395651 410030 ssh_runner.go:195] Run: containerd --version
I0127 11:52:16.423238 410030 ssh_runner.go:195] Run: containerd --version
I0127 11:52:16.449514 410030 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
I0127 11:52:16.450520 410030 main.go:141] libmachine: (bridge-230154) Calling .GetIP
I0127 11:52:16.453118 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:52:16.453477 410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
I0127 11:52:16.453507 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:52:16.453734 410030 ssh_runner.go:195] Run: grep 192.168.61.1 host.minikube.internal$ /etc/hosts
I0127 11:52:16.458237 410030 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0127 11:52:16.472482 410030 kubeadm.go:883] updating cluster {Name:bridge-230154 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-230154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.61.114 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0127 11:52:16.472594 410030 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
I0127 11:52:16.472646 410030 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 11:52:16.504936 410030 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
I0127 11:52:16.504987 410030 ssh_runner.go:195] Run: which lz4
I0127 11:52:16.509417 410030 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0127 11:52:16.514081 410030 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0127 11:52:16.514116 410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (398131433 bytes)
I0127 11:52:18.011626 410030 containerd.go:563] duration metric: took 1.502237089s to copy over tarball
I0127 11:52:18.011722 410030 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I0127 11:52:20.285505 410030 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.273743353s)
I0127 11:52:20.285572 410030 containerd.go:570] duration metric: took 2.273906638s to extract the tarball
I0127 11:52:20.285607 410030 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0127 11:52:20.324554 410030 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 11:52:20.445111 410030 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0127 11:52:20.473323 410030 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 11:52:20.503997 410030 retry.go:31] will retry after 167.428638ms: sudo crictl images --output json: Process exited with status 1
stdout:
stderr:
time="2025-01-27T11:52:20Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
I0127 11:52:20.672333 410030 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 11:52:20.709952 410030 containerd.go:627] all images are preloaded for containerd runtime.
I0127 11:52:20.709981 410030 cache_images.go:84] Images are preloaded, skipping loading
I0127 11:52:20.709993 410030 kubeadm.go:934] updating node { 192.168.61.114 8443 v1.32.1 containerd true true} ...
I0127 11:52:20.710125 410030 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-230154 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.114
[Install]
config:
{KubernetesVersion:v1.32.1 ClusterName:bridge-230154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
I0127 11:52:20.710197 410030 ssh_runner.go:195] Run: sudo crictl info
I0127 11:52:20.744967 410030 cni.go:84] Creating CNI manager for "bridge"
I0127 11:52:20.744998 410030 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0127 11:52:20.745028 410030 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.114 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-230154 NodeName:bridge-230154 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0127 11:52:20.745188 410030 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.61.114
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
name: "bridge-230154"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.61.114"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.61.114"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
- name: "proxy-refresh-interval"
value: "70000"
kubernetesVersion: v1.32.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0127 11:52:20.745251 410030 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
I0127 11:52:20.756008 410030 binaries.go:44] Found k8s binaries, skipping transfer
I0127 11:52:20.756057 410030 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0127 11:52:20.765655 410030 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
I0127 11:52:20.782155 410030 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0127 11:52:20.798911 410030 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2309 bytes)
I0127 11:52:20.816745 410030 ssh_runner.go:195] Run: grep 192.168.61.114 control-plane.minikube.internal$ /etc/hosts
I0127 11:52:20.820748 410030 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.114 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0127 11:52:20.833862 410030 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 11:52:20.953656 410030 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0127 11:52:20.974846 410030 certs.go:68] Setting up /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154 for IP: 192.168.61.114
I0127 11:52:20.974871 410030 certs.go:194] generating shared ca certs ...
I0127 11:52:20.974892 410030 certs.go:226] acquiring lock for ca certs: {Name:mkd458666dacb6826c0d92f860c3c2133957f34f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 11:52:20.975122 410030 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20319-348858/.minikube/ca.key
I0127 11:52:20.975196 410030 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20319-348858/.minikube/proxy-client-ca.key
I0127 11:52:20.975212 410030 certs.go:256] generating profile certs ...
I0127 11:52:20.975305 410030 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/client.key
I0127 11:52:20.975335 410030 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/client.crt with IP's: []
I0127 11:52:21.301307 410030 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/client.crt ...
I0127 11:52:21.301335 410030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/client.crt: {Name:mk56bf4c2bbecfad8654b1b4ec642ad6fec51061 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 11:52:21.301487 410030 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/client.key ...
I0127 11:52:21.301498 410030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/client.key: {Name:mk552257e0fe7fe2855b6465ed9cf6fdbde292fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 11:52:21.301600 410030 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/apiserver.key.efd0145a
I0127 11:52:21.301615 410030 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/apiserver.crt.efd0145a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.114]
I0127 11:52:21.347405 410030 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/apiserver.crt.efd0145a ...
I0127 11:52:21.347434 410030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/apiserver.crt.efd0145a: {Name:mk6a6599e29481626e185ed34dee333ec39afdfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 11:52:21.347596 410030 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/apiserver.key.efd0145a ...
I0127 11:52:21.347613 410030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/apiserver.key.efd0145a: {Name:mk7efccd9616f59b687d73eb0de97063b6b07fbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 11:52:21.347712 410030 certs.go:381] copying /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/apiserver.crt.efd0145a -> /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/apiserver.crt
I0127 11:52:21.347813 410030 certs.go:385] copying /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/apiserver.key.efd0145a -> /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/apiserver.key
I0127 11:52:21.347892 410030 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/proxy-client.key
I0127 11:52:21.347914 410030 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/proxy-client.crt with IP's: []
I0127 11:52:21.603596 410030 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/proxy-client.crt ...
I0127 11:52:21.603626 410030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/proxy-client.crt: {Name:mk62ae8cb0440216cba0e9b53bb75a82eea68d94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 11:52:21.603813 410030 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/proxy-client.key ...
I0127 11:52:21.603851 410030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/proxy-client.key: {Name:mk874150a052e7bf16d1760bcb83588a7d7232ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 11:52:21.604047 410030 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/356204.pem (1338 bytes)
W0127 11:52:21.604084 410030 certs.go:480] ignoring /home/jenkins/minikube-integration/20319-348858/.minikube/certs/356204_empty.pem, impossibly tiny 0 bytes
I0127 11:52:21.604094 410030 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca-key.pem (1675 bytes)
I0127 11:52:21.604127 410030 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/ca.pem (1082 bytes)
I0127 11:52:21.604150 410030 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/cert.pem (1123 bytes)
I0127 11:52:21.604173 410030 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-348858/.minikube/certs/key.pem (1679 bytes)
I0127 11:52:21.604208 410030 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-348858/.minikube/files/etc/ssl/certs/3562042.pem (1708 bytes)
I0127 11:52:21.604922 410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0127 11:52:21.640478 410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0127 11:52:21.675198 410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0127 11:52:21.707991 410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0127 11:52:21.734067 410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I0127 11:52:21.758859 410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0127 11:52:21.785069 410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0127 11:52:21.811694 410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/profiles/bridge-230154/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0127 11:52:21.839559 410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0127 11:52:21.864922 410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/certs/356204.pem --> /usr/share/ca-certificates/356204.pem (1338 bytes)
I0127 11:52:21.893151 410030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-348858/.minikube/files/etc/ssl/certs/3562042.pem --> /usr/share/ca-certificates/3562042.pem (1708 bytes)
I0127 11:52:21.918761 410030 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0127 11:52:21.936954 410030 ssh_runner.go:195] Run: openssl version
I0127 11:52:21.943412 410030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/356204.pem && ln -fs /usr/share/ca-certificates/356204.pem /etc/ssl/certs/356204.pem"
I0127 11:52:21.953934 410030 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356204.pem
I0127 11:52:21.958381 410030 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 10:40 /usr/share/ca-certificates/356204.pem
I0127 11:52:21.958435 410030 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356204.pem
I0127 11:52:21.964735 410030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/356204.pem /etc/ssl/certs/51391683.0"
I0127 11:52:21.976503 410030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3562042.pem && ln -fs /usr/share/ca-certificates/3562042.pem /etc/ssl/certs/3562042.pem"
I0127 11:52:21.987257 410030 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3562042.pem
I0127 11:52:21.993575 410030 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 10:40 /usr/share/ca-certificates/3562042.pem
I0127 11:52:21.993646 410030 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3562042.pem
I0127 11:52:21.999525 410030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3562042.pem /etc/ssl/certs/3ec20f2e.0"
I0127 11:52:22.009959 410030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0127 11:52:22.021429 410030 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0127 11:52:22.026427 410030 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 10:32 /usr/share/ca-certificates/minikubeCA.pem
I0127 11:52:22.026475 410030 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0127 11:52:22.032448 410030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0127 11:52:22.043143 410030 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0127 11:52:22.047488 410030 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0127 11:52:22.047543 410030 kubeadm.go:392] StartCluster: {Name:bridge-230154 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-230154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.61.114 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0127 11:52:22.047613 410030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0127 11:52:22.047658 410030 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0127 11:52:22.086372 410030 cri.go:89] found id: ""
I0127 11:52:22.086433 410030 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0127 11:52:22.096728 410030 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0127 11:52:22.106517 410030 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0127 11:52:22.116214 410030 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0127 11:52:22.116231 410030 kubeadm.go:157] found existing configuration files:
I0127 11:52:22.116264 410030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0127 11:52:22.125344 410030 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0127 11:52:22.125413 410030 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0127 11:52:22.134811 410030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0127 11:52:22.143836 410030 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0127 11:52:22.143877 410030 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0127 11:52:22.153251 410030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0127 11:52:22.161993 410030 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0127 11:52:22.162078 410030 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0127 11:52:22.171015 410030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0127 11:52:22.179758 410030 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0127 11:52:22.179812 410030 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0127 11:52:22.189014 410030 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0127 11:52:22.375345 410030 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0127 11:52:32.209450 410030 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
I0127 11:52:32.209522 410030 kubeadm.go:310] [preflight] Running pre-flight checks
I0127 11:52:32.209617 410030 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0127 11:52:32.209722 410030 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0127 11:52:32.209830 410030 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0127 11:52:32.209885 410030 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0127 11:52:32.211330 410030 out.go:235] - Generating certificates and keys ...
I0127 11:52:32.211448 410030 kubeadm.go:310] [certs] Using existing ca certificate authority
I0127 11:52:32.211535 410030 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0127 11:52:32.211635 410030 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0127 11:52:32.211700 410030 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0127 11:52:32.211752 410030 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0127 11:52:32.211795 410030 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0127 11:52:32.211845 410030 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0127 11:52:32.211948 410030 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-230154 localhost] and IPs [192.168.61.114 127.0.0.1 ::1]
I0127 11:52:32.211995 410030 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0127 11:52:32.212189 410030 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-230154 localhost] and IPs [192.168.61.114 127.0.0.1 ::1]
I0127 11:52:32.212294 410030 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0127 11:52:32.212377 410030 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0127 11:52:32.212435 410030 kubeadm.go:310] [certs] Generating "sa" key and public key
I0127 11:52:32.212524 410030 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0127 11:52:32.212592 410030 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0127 11:52:32.212643 410030 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0127 11:52:32.212692 410030 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0127 11:52:32.212798 410030 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0127 11:52:32.212898 410030 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0127 11:52:32.212993 410030 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0127 11:52:32.213052 410030 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0127 11:52:32.214270 410030 out.go:235] - Booting up control plane ...
I0127 11:52:32.214386 410030 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0127 11:52:32.214498 410030 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0127 11:52:32.214590 410030 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0127 11:52:32.214739 410030 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0127 11:52:32.214899 410030 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0127 11:52:32.214967 410030 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0127 11:52:32.215138 410030 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0127 11:52:32.215293 410030 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0127 11:52:32.215402 410030 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001079301s
I0127 11:52:32.215488 410030 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0127 11:52:32.215548 410030 kubeadm.go:310] [api-check] The API server is healthy after 4.502067696s
I0127 11:52:32.215682 410030 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0127 11:52:32.215799 410030 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0127 11:52:32.215885 410030 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0127 11:52:32.216101 410030 kubeadm.go:310] [mark-control-plane] Marking the node bridge-230154 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0127 11:52:32.216183 410030 kubeadm.go:310] [bootstrap-token] Using token: 3ugidl.t0qx3cfrqpz3s5rm
I0127 11:52:32.218040 410030 out.go:235] - Configuring RBAC rules ...
I0127 11:52:32.218199 410030 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0127 11:52:32.218297 410030 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0127 11:52:32.218438 410030 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0127 11:52:32.218656 410030 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0127 11:52:32.218778 410030 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0127 11:52:32.218872 410030 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0127 11:52:32.219002 410030 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0127 11:52:32.219065 410030 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0127 11:52:32.219138 410030 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0127 11:52:32.219147 410030 kubeadm.go:310]
I0127 11:52:32.219229 410030 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0127 11:52:32.219238 410030 kubeadm.go:310]
I0127 11:52:32.219362 410030 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0127 11:52:32.219371 410030 kubeadm.go:310]
I0127 11:52:32.219407 410030 kubeadm.go:310] mkdir -p $HOME/.kube
I0127 11:52:32.219511 410030 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0127 11:52:32.219596 410030 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0127 11:52:32.219609 410030 kubeadm.go:310]
I0127 11:52:32.219697 410030 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0127 11:52:32.219711 410030 kubeadm.go:310]
I0127 11:52:32.219782 410030 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0127 11:52:32.219793 410030 kubeadm.go:310]
I0127 11:52:32.219869 410030 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0127 11:52:32.219979 410030 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0127 11:52:32.220072 410030 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0127 11:52:32.220081 410030 kubeadm.go:310]
I0127 11:52:32.220215 410030 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0127 11:52:32.220347 410030 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0127 11:52:32.220359 410030 kubeadm.go:310]
I0127 11:52:32.220497 410030 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3ugidl.t0qx3cfrqpz3s5rm \
I0127 11:52:32.220638 410030 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:c769a71fa2072963699012a67c9bb4b27b6fc88b52aea51191b7b2189ca81982 \
I0127 11:52:32.220670 410030 kubeadm.go:310] --control-plane
I0127 11:52:32.220679 410030 kubeadm.go:310]
I0127 11:52:32.220787 410030 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0127 11:52:32.220796 410030 kubeadm.go:310]
I0127 11:52:32.220902 410030 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3ugidl.t0qx3cfrqpz3s5rm \
I0127 11:52:32.221064 410030 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:c769a71fa2072963699012a67c9bb4b27b6fc88b52aea51191b7b2189ca81982
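For reference, the sha256 value passed to --discovery-token-ca-cert-hash in the join commands above is a hash of the cluster CA's public key. The kubeadm-documented way to recompute it, assuming the certificate directory used in this run (/var/lib/minikube/certs):

    # illustrative; recomputes the discovery-token CA cert hash shown in the join command
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'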
I0127 11:52:32.221079 410030 cni.go:84] Creating CNI manager for "bridge"
I0127 11:52:32.222330 410030 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0127 11:52:32.223261 410030 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0127 11:52:32.235254 410030 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0127 11:52:32.261938 410030 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0127 11:52:32.262064 410030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 11:52:32.262145 410030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-230154 minikube.k8s.io/updated_at=2025_01_27T11_52_32_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=35c230aa12d4986001aef5f6e29069f3bc5493aa minikube.k8s.io/name=bridge-230154 minikube.k8s.io/primary=true
I0127 11:52:32.280765 410030 ops.go:34] apiserver oom_adj: -16
I0127 11:52:32.416195 410030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 11:52:32.916850 410030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 11:52:33.416903 410030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 11:52:33.916419 410030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 11:52:34.417254 410030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 11:52:34.916570 410030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 11:52:35.416622 410030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 11:52:35.916814 410030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 11:52:36.417150 410030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 11:52:36.512250 410030 kubeadm.go:1113] duration metric: took 4.250259054s to wait for elevateKubeSystemPrivileges
I0127 11:52:36.512301 410030 kubeadm.go:394] duration metric: took 14.46476068s to StartCluster
I0127 11:52:36.512331 410030 settings.go:142] acquiring lock: {Name:mkb277d193c8888d23a77778c65f322a69e59091 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 11:52:36.512467 410030 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20319-348858/kubeconfig
I0127 11:52:36.516653 410030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-348858/kubeconfig: {Name:mk12891275228a2835a35659c2ede45028f0a576 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 11:52:36.516976 410030 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0127 11:52:36.516972 410030 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.114 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0127 11:52:36.517077 410030 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0127 11:52:36.517203 410030 addons.go:69] Setting storage-provisioner=true in profile "bridge-230154"
I0127 11:52:36.517227 410030 addons.go:238] Setting addon storage-provisioner=true in "bridge-230154"
I0127 11:52:36.517240 410030 config.go:182] Loaded profile config "bridge-230154": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 11:52:36.517270 410030 host.go:66] Checking if "bridge-230154" exists ...
I0127 11:52:36.517307 410030 addons.go:69] Setting default-storageclass=true in profile "bridge-230154"
I0127 11:52:36.517328 410030 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-230154"
I0127 11:52:36.517801 410030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 11:52:36.517819 410030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 11:52:36.517855 410030 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 11:52:36.517860 410030 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 11:52:36.519326 410030 out.go:177] * Verifying Kubernetes components...
I0127 11:52:36.520466 410030 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 11:52:36.537759 410030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32825
I0127 11:52:36.538308 410030 main.go:141] libmachine: () Calling .GetVersion
I0127 11:52:36.538532 410030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39909
I0127 11:52:36.538955 410030 main.go:141] libmachine: Using API Version 1
I0127 11:52:36.538984 410030 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 11:52:36.539060 410030 main.go:141] libmachine: () Calling .GetVersion
I0127 11:52:36.539411 410030 main.go:141] libmachine: () Calling .GetMachineName
I0127 11:52:36.539558 410030 main.go:141] libmachine: Using API Version 1
I0127 11:52:36.539581 410030 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 11:52:36.539945 410030 main.go:141] libmachine: () Calling .GetMachineName
I0127 11:52:36.539986 410030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 11:52:36.540037 410030 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 11:52:36.540303 410030 main.go:141] libmachine: (bridge-230154) Calling .GetState
I0127 11:52:36.543982 410030 addons.go:238] Setting addon default-storageclass=true in "bridge-230154"
I0127 11:52:36.544027 410030 host.go:66] Checking if "bridge-230154" exists ...
I0127 11:52:36.544408 410030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 11:52:36.544452 410030 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 11:52:36.557799 410030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38083
I0127 11:52:36.558329 410030 main.go:141] libmachine: () Calling .GetVersion
I0127 11:52:36.558879 410030 main.go:141] libmachine: Using API Version 1
I0127 11:52:36.558897 410030 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 11:52:36.559224 410030 main.go:141] libmachine: () Calling .GetMachineName
I0127 11:52:36.559412 410030 main.go:141] libmachine: (bridge-230154) Calling .GetState
I0127 11:52:36.559996 410030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32917
I0127 11:52:36.560556 410030 main.go:141] libmachine: () Calling .GetVersion
I0127 11:52:36.561039 410030 main.go:141] libmachine: Using API Version 1
I0127 11:52:36.561051 410030 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 11:52:36.561110 410030 main.go:141] libmachine: (bridge-230154) Calling .DriverName
I0127 11:52:36.561469 410030 main.go:141] libmachine: () Calling .GetMachineName
I0127 11:52:36.561948 410030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 11:52:36.561991 410030 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 11:52:36.562672 410030 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0127 11:52:36.563764 410030 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0127 11:52:36.563778 410030 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0127 11:52:36.563793 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHHostname
I0127 11:52:36.567499 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:52:36.568057 410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
I0127 11:52:36.568077 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:52:36.568247 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHPort
I0127 11:52:36.568401 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
I0127 11:52:36.568577 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHUsername
I0127 11:52:36.568732 410030 sshutil.go:53] new ssh client: &{IP:192.168.61.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154/id_rsa Username:docker}
I0127 11:52:36.577540 410030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46039
I0127 11:52:36.578011 410030 main.go:141] libmachine: () Calling .GetVersion
I0127 11:52:36.578548 410030 main.go:141] libmachine: Using API Version 1
I0127 11:52:36.578571 410030 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 11:52:36.578891 410030 main.go:141] libmachine: () Calling .GetMachineName
I0127 11:52:36.579083 410030 main.go:141] libmachine: (bridge-230154) Calling .GetState
I0127 11:52:36.580470 410030 main.go:141] libmachine: (bridge-230154) Calling .DriverName
I0127 11:52:36.580638 410030 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0127 11:52:36.580655 410030 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0127 11:52:36.580682 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHHostname
I0127 11:52:36.583026 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:52:36.583362 410030 main.go:141] libmachine: (bridge-230154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:3a:f7", ip: ""} in network mk-bridge-230154: {Iface:virbr3 ExpiryTime:2025-01-27 12:52:02 +0000 UTC Type:0 Mac:52:54:00:79:3a:f7 Iaid: IPaddr:192.168.61.114 Prefix:24 Hostname:bridge-230154 Clientid:01:52:54:00:79:3a:f7}
I0127 11:52:36.583391 410030 main.go:141] libmachine: (bridge-230154) DBG | domain bridge-230154 has defined IP address 192.168.61.114 and MAC address 52:54:00:79:3a:f7 in network mk-bridge-230154
I0127 11:52:36.583573 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHPort
I0127 11:52:36.583748 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHKeyPath
I0127 11:52:36.583875 410030 main.go:141] libmachine: (bridge-230154) Calling .GetSSHUsername
I0127 11:52:36.584004 410030 sshutil.go:53] new ssh client: &{IP:192.168.61.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20319-348858/.minikube/machines/bridge-230154/id_rsa Username:docker}
I0127 11:52:36.919631 410030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0127 11:52:36.921628 410030 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0127 11:52:36.921644 410030 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.61.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0127 11:52:36.988242 410030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0127 11:52:38.185164 410030 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.265497157s)
I0127 11:52:38.185231 410030 main.go:141] libmachine: Making call to close driver server
I0127 11:52:38.185230 410030 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.263561786s)
I0127 11:52:38.185246 410030 main.go:141] libmachine: (bridge-230154) Calling .Close
I0127 11:52:38.185289 410030 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.61.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.263612805s)
I0127 11:52:38.185330 410030 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
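The sed pipeline above edits the coredns ConfigMap in place; assuming a stock kubeadm Corefile, the injected directives end up roughly as follows (abridged sketch, other plugins omitted):

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.61.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }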
I0127 11:52:38.185372 410030 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.197100952s)
I0127 11:52:38.185399 410030 main.go:141] libmachine: Making call to close driver server
I0127 11:52:38.185427 410030 main.go:141] libmachine: (bridge-230154) Calling .Close
I0127 11:52:38.185562 410030 main.go:141] libmachine: Successfully made call to close driver server
I0127 11:52:38.185597 410030 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 11:52:38.185609 410030 main.go:141] libmachine: Making call to close driver server
I0127 11:52:38.185616 410030 main.go:141] libmachine: (bridge-230154) Calling .Close
I0127 11:52:38.185828 410030 main.go:141] libmachine: Successfully made call to close driver server
I0127 11:52:38.185852 410030 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 11:52:38.185862 410030 main.go:141] libmachine: Making call to close driver server
I0127 11:52:38.185868 410030 main.go:141] libmachine: (bridge-230154) Calling .Close
I0127 11:52:38.186004 410030 main.go:141] libmachine: (bridge-230154) DBG | Closing plugin on server side
I0127 11:52:38.186048 410030 main.go:141] libmachine: Successfully made call to close driver server
I0127 11:52:38.186069 410030 main.go:141] libmachine: Successfully made call to close driver server
I0127 11:52:38.186075 410030 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 11:52:38.186079 410030 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 11:52:38.187011 410030 node_ready.go:35] waiting up to 15m0s for node "bridge-230154" to be "Ready" ...
I0127 11:52:38.212873 410030 node_ready.go:49] node "bridge-230154" has status "Ready":"True"
I0127 11:52:38.212905 410030 node_ready.go:38] duration metric: took 25.865633ms for node "bridge-230154" to be "Ready" ...
I0127 11:52:38.212917 410030 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0127 11:52:38.214274 410030 main.go:141] libmachine: Making call to close driver server
I0127 11:52:38.214298 410030 main.go:141] libmachine: (bridge-230154) Calling .Close
I0127 11:52:38.214581 410030 main.go:141] libmachine: Successfully made call to close driver server
I0127 11:52:38.214630 410030 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 11:52:38.214612 410030 main.go:141] libmachine: (bridge-230154) DBG | Closing plugin on server side
I0127 11:52:38.216008 410030 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0127 11:52:38.216924 410030 addons.go:514] duration metric: took 1.699865075s for enable addons: enabled=[storage-provisioner default-storageclass]
I0127 11:52:38.224349 410030 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-4c298" in "kube-system" namespace to be "Ready" ...
I0127 11:52:38.695217 410030 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-230154" context rescaled to 1 replicas
I0127 11:52:40.231472 410030 pod_ready.go:103] pod "coredns-668d6bf9bc-4c298" in "kube-system" namespace has status "Ready":"False"
I0127 11:52:42.732355 410030 pod_ready.go:103] pod "coredns-668d6bf9bc-4c298" in "kube-system" namespace has status "Ready":"False"
I0127 11:52:44.230143 410030 pod_ready.go:98] pod "coredns-668d6bf9bc-4c298" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 11:52:44 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 11:52:36 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 11:52:36 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 11:52:36 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 11:52:36 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.61.114 HostIPs:[{IP:192.168.61.114}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-01-27 11:52:36 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-01-27 11:52:37 +0000 UTC,FinishedAt:2025-01-27 11:52:43 +0000 UTC,ContainerID:containerd://c98c745d4f9edf1ff917ee47655ca1208c7e4b09a4743c10c5415ed7b2fec8bd,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:containerd://c98c745d4f9edf1ff917ee47655ca1208c7e4b09a4743c10c5415ed7b2fec8bd Started:0xc001b44fd0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001c1e130} {Name:kube-api-access-flxzd MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001c1e140}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
I0127 11:52:44.230174 410030 pod_ready.go:82] duration metric: took 6.00579922s for pod "coredns-668d6bf9bc-4c298" in "kube-system" namespace to be "Ready" ...
E0127 11:52:44.230189 410030 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-668d6bf9bc-4c298" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 11:52:44 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 11:52:36 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 11:52:36 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 11:52:36 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-01-27 11:52:36 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.61.114 HostIPs:[{IP:192.168.61.114}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-01-27 11:52:36 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-01-27 11:52:37 +0000 UTC,FinishedAt:2025-01-27 11:52:43 +0000 UTC,ContainerID:containerd://c98c745d4f9edf1ff917ee47655ca1208c7e4b09a4743c10c5415ed7b2fec8bd,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:containerd://c98c745d4f9edf1ff917ee47655ca1208c7e4b09a4743c10c5415ed7b2fec8bd Started:0xc001b44fd0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001c1e130} {Name:kube-api-access-flxzd MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001c1e140}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
I0127 11:52:44.230202 410030 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-pc8xl" in "kube-system" namespace to be "Ready" ...
I0127 11:52:44.234796 410030 pod_ready.go:93] pod "coredns-668d6bf9bc-pc8xl" in "kube-system" namespace has status "Ready":"True"
I0127 11:52:44.234815 410030 pod_ready.go:82] duration metric: took 4.604397ms for pod "coredns-668d6bf9bc-pc8xl" in "kube-system" namespace to be "Ready" ...
I0127 11:52:44.234823 410030 pod_ready.go:79] waiting up to 15m0s for pod "etcd-bridge-230154" in "kube-system" namespace to be "Ready" ...
I0127 11:52:44.238759 410030 pod_ready.go:93] pod "etcd-bridge-230154" in "kube-system" namespace has status "Ready":"True"
I0127 11:52:44.238775 410030 pod_ready.go:82] duration metric: took 3.947094ms for pod "etcd-bridge-230154" in "kube-system" namespace to be "Ready" ...
I0127 11:52:44.238782 410030 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-bridge-230154" in "kube-system" namespace to be "Ready" ...
I0127 11:52:45.244732 410030 pod_ready.go:93] pod "kube-apiserver-bridge-230154" in "kube-system" namespace has status "Ready":"True"
I0127 11:52:45.244763 410030 pod_ready.go:82] duration metric: took 1.00597309s for pod "kube-apiserver-bridge-230154" in "kube-system" namespace to be "Ready" ...
I0127 11:52:45.244778 410030 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-bridge-230154" in "kube-system" namespace to be "Ready" ...
I0127 11:52:45.249321 410030 pod_ready.go:93] pod "kube-controller-manager-bridge-230154" in "kube-system" namespace has status "Ready":"True"
I0127 11:52:45.249342 410030 pod_ready.go:82] duration metric: took 4.554992ms for pod "kube-controller-manager-bridge-230154" in "kube-system" namespace to be "Ready" ...
I0127 11:52:45.249355 410030 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-5xb8t" in "kube-system" namespace to be "Ready" ...
I0127 11:52:45.428257 410030 pod_ready.go:93] pod "kube-proxy-5xb8t" in "kube-system" namespace has status "Ready":"True"
I0127 11:52:45.428277 410030 pod_ready.go:82] duration metric: took 178.914707ms for pod "kube-proxy-5xb8t" in "kube-system" namespace to be "Ready" ...
I0127 11:52:45.428285 410030 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-bridge-230154" in "kube-system" namespace to be "Ready" ...
I0127 11:52:45.829776 410030 pod_ready.go:93] pod "kube-scheduler-bridge-230154" in "kube-system" namespace has status "Ready":"True"
I0127 11:52:45.829809 410030 pod_ready.go:82] duration metric: took 401.516042ms for pod "kube-scheduler-bridge-230154" in "kube-system" namespace to be "Ready" ...
I0127 11:52:45.829824 410030 pod_ready.go:39] duration metric: took 7.616894592s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0127 11:52:45.829844 410030 api_server.go:52] waiting for apiserver process to appear ...
I0127 11:52:45.829909 410030 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 11:52:45.845203 410030 api_server.go:72] duration metric: took 9.328191567s to wait for apiserver process to appear ...
I0127 11:52:45.845230 410030 api_server.go:88] waiting for apiserver healthz status ...
I0127 11:52:45.845249 410030 api_server.go:253] Checking apiserver healthz at https://192.168.61.114:8443/healthz ...
I0127 11:52:45.849548 410030 api_server.go:279] https://192.168.61.114:8443/healthz returned 200:
ok
I0127 11:52:45.850315 410030 api_server.go:141] control plane version: v1.32.1
I0127 11:52:45.850339 410030 api_server.go:131] duration metric: took 5.10115ms to wait for apiserver health ...
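The healthz probe above can be reproduced by hand against the same endpoint; the CA path below is the cluster certificate directory used earlier in this log, readable from inside the node (or use -k from the host to skip TLS verification):

    # illustrative; same check minikube performs, expected response body: ok
    curl --cacert /var/lib/minikube/certs/ca.crt https://192.168.61.114:8443/healthz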
I0127 11:52:45.850346 410030 system_pods.go:43] waiting for kube-system pods to appear ...
I0127 11:52:46.030070 410030 system_pods.go:59] 7 kube-system pods found
I0127 11:52:46.030111 410030 system_pods.go:61] "coredns-668d6bf9bc-pc8xl" [45ae809c-52a6-4405-8382-d79f3d6b3e58] Running
I0127 11:52:46.030120 410030 system_pods.go:61] "etcd-bridge-230154" [64ffad49-a1cc-4273-a76f-27829ec98715] Running
I0127 11:52:46.030127 410030 system_pods.go:61] "kube-apiserver-bridge-230154" [ef8b8909-ce47-4280-a8ad-1c3dcd14e862] Running
I0127 11:52:46.030142 410030 system_pods.go:61] "kube-controller-manager-bridge-230154" [c8aff057-390a-474d-9436-1bdcc79bd8de] Running
I0127 11:52:46.030149 410030 system_pods.go:61] "kube-proxy-5xb8t" [bf62bfa5-b098-442e-b13c-2a041c874c50] Running
I0127 11:52:46.030159 410030 system_pods.go:61] "kube-scheduler-bridge-230154" [da73974e-b55e-400b-a078-0903bc8b7285] Running
I0127 11:52:46.030169 410030 system_pods.go:61] "storage-provisioner" [58b2ed51-7586-457e-a455-1a52afbcc2fd] Running
I0127 11:52:46.030181 410030 system_pods.go:74] duration metric: took 179.827627ms to wait for pod list to return data ...
I0127 11:52:46.030196 410030 default_sa.go:34] waiting for default service account to be created ...
I0127 11:52:46.228329 410030 default_sa.go:45] found service account: "default"
I0127 11:52:46.228364 410030 default_sa.go:55] duration metric: took 198.158482ms for default service account to be created ...
I0127 11:52:46.228375 410030 system_pods.go:137] waiting for k8s-apps to be running ...
I0127 11:52:46.430997 410030 system_pods.go:87] 7 kube-system pods found
I0127 11:52:46.630596 410030 system_pods.go:105] "coredns-668d6bf9bc-pc8xl" [45ae809c-52a6-4405-8382-d79f3d6b3e58] Running
I0127 11:52:46.630617 410030 system_pods.go:105] "etcd-bridge-230154" [64ffad49-a1cc-4273-a76f-27829ec98715] Running
I0127 11:52:46.630623 410030 system_pods.go:105] "kube-apiserver-bridge-230154" [ef8b8909-ce47-4280-a8ad-1c3dcd14e862] Running
I0127 11:52:46.630628 410030 system_pods.go:105] "kube-controller-manager-bridge-230154" [c8aff057-390a-474d-9436-1bdcc79bd8de] Running
I0127 11:52:46.630632 410030 system_pods.go:105] "kube-proxy-5xb8t" [bf62bfa5-b098-442e-b13c-2a041c874c50] Running
I0127 11:52:46.630636 410030 system_pods.go:105] "kube-scheduler-bridge-230154" [da73974e-b55e-400b-a078-0903bc8b7285] Running
I0127 11:52:46.630640 410030 system_pods.go:105] "storage-provisioner" [58b2ed51-7586-457e-a455-1a52afbcc2fd] Running
I0127 11:52:46.630649 410030 system_pods.go:147] duration metric: took 402.266545ms to wait for k8s-apps to be running ...
I0127 11:52:46.630655 410030 system_svc.go:44] waiting for kubelet service to be running ....
I0127 11:52:46.630700 410030 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0127 11:52:46.647032 410030 system_svc.go:56] duration metric: took 16.365202ms WaitForService to wait for kubelet
I0127 11:52:46.647063 410030 kubeadm.go:582] duration metric: took 10.130054313s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0127 11:52:46.647088 410030 node_conditions.go:102] verifying NodePressure condition ...
I0127 11:52:46.828212 410030 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0127 11:52:46.828240 410030 node_conditions.go:123] node cpu capacity is 2
I0127 11:52:46.828255 410030 node_conditions.go:105] duration metric: took 181.16132ms to run NodePressure ...
I0127 11:52:46.828269 410030 start.go:241] waiting for startup goroutines ...
I0127 11:52:46.828280 410030 start.go:246] waiting for cluster config update ...
I0127 11:52:46.828295 410030 start.go:255] writing updated cluster config ...
I0127 11:52:46.828597 410030 ssh_runner.go:195] Run: rm -f paused
I0127 11:52:46.879719 410030 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
I0127 11:52:46.881278 410030 out.go:177] * Done! kubectl is now configured to use "bridge-230154" cluster and "default" namespace by default
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
4a33d428f4f77 523cad1a4df73 4 seconds ago Exited dashboard-metrics-scraper 9 45045ebd5057a dashboard-metrics-scraper-86c6bf9756-k2swq
b9368a2abd7ba 07655ddf2eebe 21 minutes ago Running kubernetes-dashboard 0 4d8859601631b kubernetes-dashboard-7779f9b69b-2c6kt
c005698c25d45 6e38f40d628db 21 minutes ago Running storage-provisioner 0 3e1c9c73a2968 storage-provisioner
4eef6bae239b9 c69fa2e9cbf5f 21 minutes ago Running coredns 0 c93ac2efa24fb coredns-668d6bf9bc-5cktj
6db8e2b9dbed6 c69fa2e9cbf5f 21 minutes ago Running coredns 0 7b054360c4744 coredns-668d6bf9bc-kjqjk
f936328e91f32 e29f9c7391fd9 21 minutes ago Running kube-proxy 0 01916883da50b kube-proxy-44m77
bb73d9fe3729d a9e7e6b294baf 21 minutes ago Running etcd 2 89c7c8b36c50d etcd-no-preload-976043
765852d6ddf17 2b0d6572d062c 21 minutes ago Running kube-scheduler 2 db4bdfdbdadbe kube-scheduler-no-preload-976043
aaea52032a210 95c0bda56fc4d 21 minutes ago Running kube-apiserver 2 809e61c50c175 kube-apiserver-no-preload-976043
4fafe9b41d24a 019ee182b58e2 21 minutes ago Running kube-controller-manager 2 3b1b86b7b9e65 kube-controller-manager-no-preload-976043
==> containerd <==
Jan 27 12:03:08 no-preload-976043 containerd[559]: time="2025-01-27T12:03:08.861997046Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
Jan 27 12:03:08 no-preload-976043 containerd[559]: time="2025-01-27T12:03:08.864171324Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
Jan 27 12:03:08 no-preload-976043 containerd[559]: time="2025-01-27T12:03:08.864261186Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
Jan 27 12:03:26 no-preload-976043 containerd[559]: time="2025-01-27T12:03:26.854027820Z" level=info msg="CreateContainer within sandbox \"45045ebd5057a80801127322387e6020ed1b9d72cd06260400445ee1c56bfb57\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,}"
Jan 27 12:03:26 no-preload-976043 containerd[559]: time="2025-01-27T12:03:26.892407230Z" level=info msg="CreateContainer within sandbox \"45045ebd5057a80801127322387e6020ed1b9d72cd06260400445ee1c56bfb57\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,} returns container id \"bbdc6a1cad758c5db919b30fd742693c6e9d46a4048b2e6f4aca646a3a9ed0bb\""
Jan 27 12:03:26 no-preload-976043 containerd[559]: time="2025-01-27T12:03:26.893654754Z" level=info msg="StartContainer for \"bbdc6a1cad758c5db919b30fd742693c6e9d46a4048b2e6f4aca646a3a9ed0bb\""
Jan 27 12:03:26 no-preload-976043 containerd[559]: time="2025-01-27T12:03:26.975184957Z" level=info msg="StartContainer for \"bbdc6a1cad758c5db919b30fd742693c6e9d46a4048b2e6f4aca646a3a9ed0bb\" returns successfully"
Jan 27 12:03:27 no-preload-976043 containerd[559]: time="2025-01-27T12:03:27.022213505Z" level=info msg="shim disconnected" id=bbdc6a1cad758c5db919b30fd742693c6e9d46a4048b2e6f4aca646a3a9ed0bb namespace=k8s.io
Jan 27 12:03:27 no-preload-976043 containerd[559]: time="2025-01-27T12:03:27.022361690Z" level=warning msg="cleaning up after shim disconnected" id=bbdc6a1cad758c5db919b30fd742693c6e9d46a4048b2e6f4aca646a3a9ed0bb namespace=k8s.io
Jan 27 12:03:27 no-preload-976043 containerd[559]: time="2025-01-27T12:03:27.022450948Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 27 12:03:27 no-preload-976043 containerd[559]: time="2025-01-27T12:03:27.555124668Z" level=info msg="RemoveContainer for \"1f0e08ad11074a1d1459ebc1363490d304fb38d6d2ff3731ae14d271c8eb0fa7\""
Jan 27 12:03:27 no-preload-976043 containerd[559]: time="2025-01-27T12:03:27.564200708Z" level=info msg="RemoveContainer for \"1f0e08ad11074a1d1459ebc1363490d304fb38d6d2ff3731ae14d271c8eb0fa7\" returns successfully"
Jan 27 12:08:10 no-preload-976043 containerd[559]: time="2025-01-27T12:08:10.855343700Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 27 12:08:10 no-preload-976043 containerd[559]: time="2025-01-27T12:08:10.871026681Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
Jan 27 12:08:10 no-preload-976043 containerd[559]: time="2025-01-27T12:08:10.873079989Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
Jan 27 12:08:10 no-preload-976043 containerd[559]: time="2025-01-27T12:08:10.873168589Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
Jan 27 12:08:28 no-preload-976043 containerd[559]: time="2025-01-27T12:08:28.853808420Z" level=info msg="CreateContainer within sandbox \"45045ebd5057a80801127322387e6020ed1b9d72cd06260400445ee1c56bfb57\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,}"
Jan 27 12:08:28 no-preload-976043 containerd[559]: time="2025-01-27T12:08:28.875535910Z" level=info msg="CreateContainer within sandbox \"45045ebd5057a80801127322387e6020ed1b9d72cd06260400445ee1c56bfb57\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,} returns container id \"4a33d428f4f77088c0fe4dc8dc83c37f2b94ad54fa3c966ba47b67e1e3be5b30\""
Jan 27 12:08:28 no-preload-976043 containerd[559]: time="2025-01-27T12:08:28.876598002Z" level=info msg="StartContainer for \"4a33d428f4f77088c0fe4dc8dc83c37f2b94ad54fa3c966ba47b67e1e3be5b30\""
Jan 27 12:08:28 no-preload-976043 containerd[559]: time="2025-01-27T12:08:28.949858479Z" level=info msg="StartContainer for \"4a33d428f4f77088c0fe4dc8dc83c37f2b94ad54fa3c966ba47b67e1e3be5b30\" returns successfully"
Jan 27 12:08:28 no-preload-976043 containerd[559]: time="2025-01-27T12:08:28.995611575Z" level=info msg="shim disconnected" id=4a33d428f4f77088c0fe4dc8dc83c37f2b94ad54fa3c966ba47b67e1e3be5b30 namespace=k8s.io
Jan 27 12:08:28 no-preload-976043 containerd[559]: time="2025-01-27T12:08:28.995768308Z" level=warning msg="cleaning up after shim disconnected" id=4a33d428f4f77088c0fe4dc8dc83c37f2b94ad54fa3c966ba47b67e1e3be5b30 namespace=k8s.io
Jan 27 12:08:28 no-preload-976043 containerd[559]: time="2025-01-27T12:08:28.995872790Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 27 12:08:29 no-preload-976043 containerd[559]: time="2025-01-27T12:08:29.246869050Z" level=info msg="RemoveContainer for \"bbdc6a1cad758c5db919b30fd742693c6e9d46a4048b2e6f4aca646a3a9ed0bb\""
Jan 27 12:08:29 no-preload-976043 containerd[559]: time="2025-01-27T12:08:29.252119353Z" level=info msg="RemoveContainer for \"bbdc6a1cad758c5db919b30fd742693c6e9d46a4048b2e6f4aca646a3a9ed0bb\" returns successfully"
==> coredns [4eef6bae239b90f4992d4b21636d91a4816334e40d073853f0c610ca8e6ff0ba] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
CoreDNS-1.11.3
linux/amd64, go1.21.11, a6338e9
==> coredns [6db8e2b9dbed6e543ea5749ee7b922719309f1e0d1601d1c22528d4d9567869f] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
CoreDNS-1.11.3
linux/amd64, go1.21.11, a6338e9
==> describe nodes <==
Name: no-preload-976043
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=no-preload-976043
kubernetes.io/os=linux
minikube.k8s.io/commit=35c230aa12d4986001aef5f6e29069f3bc5493aa
minikube.k8s.io/name=no-preload-976043
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_01_27T11_47_02_0700
minikube.k8s.io/version=v1.35.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 27 Jan 2025 11:46:59 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: no-preload-976043
AcquireTime: <unset>
RenewTime: Mon, 27 Jan 2025 12:08:29 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 27 Jan 2025 12:04:55 +0000 Mon, 27 Jan 2025 11:46:57 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 27 Jan 2025 12:04:55 +0000 Mon, 27 Jan 2025 11:46:57 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 27 Jan 2025 12:04:55 +0000 Mon, 27 Jan 2025 11:46:57 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 27 Jan 2025 12:04:55 +0000 Mon, 27 Jan 2025 11:46:59 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.72.171
Hostname: no-preload-976043
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 2164184Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 2164184Ki
pods: 110
System Info:
Machine ID: aada06ce10ef4ecdbcc624ca12030b51
System UUID: aada06ce-10ef-4ecd-bcc6-24ca12030b51
Boot ID: 26eb3504-eb5c-421e-85ec-5c0bf85b4166
Kernel Version: 5.10.207
OS Image: Buildroot 2023.02.9
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.7.23
Kubelet Version: v1.32.1
Kube-Proxy Version: v1.32.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (11 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-668d6bf9bc-5cktj 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 21m
kube-system coredns-668d6bf9bc-kjqjk 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 21m
kube-system etcd-no-preload-976043 100m (5%) 0 (0%) 100Mi (4%) 0 (0%) 21m
kube-system kube-apiserver-no-preload-976043 250m (12%) 0 (0%) 0 (0%) 0 (0%) 21m
kube-system kube-controller-manager-no-preload-976043 200m (10%) 0 (0%) 0 (0%) 0 (0%) 21m
kube-system kube-proxy-44m77 0 (0%) 0 (0%) 0 (0%) 0 (0%) 21m
kube-system kube-scheduler-no-preload-976043 100m (5%) 0 (0%) 0 (0%) 0 (0%) 21m
kube-system metrics-server-f79f97bbb-cxprr 100m (5%) 0 (0%) 200Mi (9%) 0 (0%) 21m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 21m
kubernetes-dashboard dashboard-metrics-scraper-86c6bf9756-k2swq 0 (0%) 0 (0%) 0 (0%) 0 (0%) 21m
kubernetes-dashboard kubernetes-dashboard-7779f9b69b-2c6kt 0 (0%) 0 (0%) 0 (0%) 0 (0%) 21m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 950m (47%) 0 (0%)
memory 440Mi (20%) 340Mi (16%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 21m kube-proxy
Normal Starting 21m kubelet Starting kubelet.
Normal NodeAllocatableEnforced 21m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 21m kubelet Node no-preload-976043 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 21m kubelet Node no-preload-976043 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 21m kubelet Node no-preload-976043 status is now: NodeHasSufficientPID
Normal RegisteredNode 21m node-controller Node no-preload-976043 event: Registered Node no-preload-976043 in Controller
==> dmesg <==
[ +0.041941] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +5.298744] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +2.922659] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
[ +1.611846] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
[ +6.702018] systemd-fstab-generator[483]: Ignoring "noauto" option for root device
[ +0.064658] kauditd_printk_skb: 1 callbacks suppressed
[ +0.072603] systemd-fstab-generator[495]: Ignoring "noauto" option for root device
[ +0.167173] systemd-fstab-generator[509]: Ignoring "noauto" option for root device
[ +0.161443] systemd-fstab-generator[521]: Ignoring "noauto" option for root device
[ +0.332995] systemd-fstab-generator[551]: Ignoring "noauto" option for root device
[ +1.385910] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
[ +2.283888] systemd-fstab-generator[754]: Ignoring "noauto" option for root device
[ +0.967574] kauditd_printk_skb: 225 callbacks suppressed
[ +5.023864] kauditd_printk_skb: 40 callbacks suppressed
[ +8.365895] kauditd_printk_skb: 80 callbacks suppressed
[Jan27 11:46] systemd-fstab-generator[3032]: Ignoring "noauto" option for root device
[Jan27 11:47] systemd-fstab-generator[3396]: Ignoring "noauto" option for root device
[ +0.087270] kauditd_printk_skb: 87 callbacks suppressed
[ +5.383394] systemd-fstab-generator[3500]: Ignoring "noauto" option for root device
[ +0.141338] kauditd_printk_skb: 12 callbacks suppressed
[ +5.024114] kauditd_printk_skb: 100 callbacks suppressed
[ +5.061162] kauditd_printk_skb: 5 callbacks suppressed
[ +5.052190] kauditd_printk_skb: 2 callbacks suppressed
==> etcd [bb73d9fe3729da81efffc2bbac4d8fed9055414e43f045b96bdc838a83b600eb] <==
{"level":"info","ts":"2025-01-27T11:50:29.173841Z","caller":"traceutil/trace.go:171","msg":"trace[1667898271] transaction","detail":"{read_only:false; response_revision:751; number_of_response:1; }","duration":"456.979186ms","start":"2025-01-27T11:50:28.716848Z","end":"2025-01-27T11:50:29.173827Z","steps":["trace[1667898271] 'process raft request' (duration: 357.258956ms)","trace[1667898271] 'compare' (duration: 96.956113ms)"],"step_count":2}
{"level":"warn","ts":"2025-01-27T11:50:29.174336Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T11:50:28.716824Z","time spent":"457.36293ms","remote":"127.0.0.1:48226","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4482,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-k2swq\" mod_revision:663 > success:<request_put:<key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-k2swq\" value_size:4396 >> failure:<request_range:<key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-k2swq\" > >"}
{"level":"warn","ts":"2025-01-27T11:50:29.173390Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T11:50:28.846088Z","time spent":"326.778742ms","remote":"127.0.0.1:48226","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
{"level":"warn","ts":"2025-01-27T11:50:29.675416Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"229.801592ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-01-27T11:50:29.675644Z","caller":"traceutil/trace.go:171","msg":"trace[1160123513] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:751; }","duration":"230.075684ms","start":"2025-01-27T11:50:29.445549Z","end":"2025-01-27T11:50:29.675625Z","steps":["trace[1160123513] 'range keys from in-memory index tree' (duration: 229.734866ms)"],"step_count":1}
{"level":"warn","ts":"2025-01-27T11:50:29.676823Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.208841ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-01-27T11:50:29.676903Z","caller":"traceutil/trace.go:171","msg":"trace[552255097] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:751; }","duration":"187.287286ms","start":"2025-01-27T11:50:29.489596Z","end":"2025-01-27T11:50:29.676884Z","steps":["trace[552255097] 'range keys from in-memory index tree' (duration: 185.856232ms)"],"step_count":1}
{"level":"warn","ts":"2025-01-27T11:51:18.349064Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.29434ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-01-27T11:51:18.349214Z","caller":"traceutil/trace.go:171","msg":"trace[766871959] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:801; }","duration":"105.497777ms","start":"2025-01-27T11:51:18.243663Z","end":"2025-01-27T11:51:18.349160Z","steps":["trace[766871959] 'range keys from in-memory index tree' (duration: 105.242856ms)"],"step_count":1}
{"level":"warn","ts":"2025-01-27T11:52:20.780108Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.055559ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-01-27T11:52:20.780824Z","caller":"traceutil/trace.go:171","msg":"trace[650371020] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:860; }","duration":"138.762728ms","start":"2025-01-27T11:52:20.641997Z","end":"2025-01-27T11:52:20.780759Z","steps":["trace[650371020] 'range keys from in-memory index tree' (duration: 137.948442ms)"],"step_count":1}
{"level":"warn","ts":"2025-01-27T11:52:21.425865Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.203102ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2738070545962608430 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.72.171\" mod_revision:850 > success:<request_put:<key:\"/registry/masterleases/192.168.72.171\" value_size:67 lease:2738070545962608428 >> failure:<request_range:<key:\"/registry/masterleases/192.168.72.171\" > >>","response":"size:16"}
{"level":"info","ts":"2025-01-27T11:52:21.426418Z","caller":"traceutil/trace.go:171","msg":"trace[1573745400] linearizableReadLoop","detail":"{readStateIndex:939; appliedIndex:938; }","duration":"185.17421ms","start":"2025-01-27T11:52:21.241150Z","end":"2025-01-27T11:52:21.426325Z","steps":["trace[1573745400] 'read index received' (duration: 57.266863ms)","trace[1573745400] 'applied index is now lower than readState.Index' (duration: 127.905786ms)"],"step_count":2}
{"level":"info","ts":"2025-01-27T11:52:21.426615Z","caller":"traceutil/trace.go:171","msg":"trace[1380050990] transaction","detail":"{read_only:false; response_revision:861; number_of_response:1; }","duration":"256.249513ms","start":"2025-01-27T11:52:21.170346Z","end":"2025-01-27T11:52:21.426595Z","steps":["trace[1380050990] 'process raft request' (duration: 128.167227ms)","trace[1380050990] 'compare' (duration: 127.091778ms)"],"step_count":2}
{"level":"warn","ts":"2025-01-27T11:52:21.427502Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.99366ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-01-27T11:52:21.428091Z","caller":"traceutil/trace.go:171","msg":"trace[1365611756] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:861; }","duration":"186.952002ms","start":"2025-01-27T11:52:21.241129Z","end":"2025-01-27T11:52:21.428081Z","steps":["trace[1365611756] 'agreement among raft nodes before linearized reading' (duration: 185.602495ms)"],"step_count":1}
{"level":"info","ts":"2025-01-27T11:56:58.051917Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":837}
{"level":"info","ts":"2025-01-27T11:56:58.095736Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":837,"took":"42.699334ms","hash":2742367256,"current-db-size-bytes":3031040,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":3031040,"current-db-size-in-use":"3.0 MB"}
{"level":"info","ts":"2025-01-27T11:56:58.095879Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2742367256,"revision":837,"compact-revision":-1}
{"level":"info","ts":"2025-01-27T12:01:58.059293Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1089}
{"level":"info","ts":"2025-01-27T12:01:58.063974Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1089,"took":"3.718738ms","hash":81811223,"current-db-size-bytes":3031040,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":1773568,"current-db-size-in-use":"1.8 MB"}
{"level":"info","ts":"2025-01-27T12:01:58.064150Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":81811223,"revision":1089,"compact-revision":837}
{"level":"info","ts":"2025-01-27T12:06:58.065082Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1340}
{"level":"info","ts":"2025-01-27T12:06:58.069083Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1340,"took":"3.489878ms","hash":2963485821,"current-db-size-bytes":3031040,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":1794048,"current-db-size-in-use":"1.8 MB"}
{"level":"info","ts":"2025-01-27T12:06:58.069163Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2963485821,"revision":1340,"compact-revision":1089}
==> kernel <==
12:08:34 up 26 min, 0 users, load average: 0.28, 0.26, 0.20
Linux no-preload-976043 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2023.02.9"
==> kube-apiserver [aaea52032a21044a8697f1cf67f1a61c3d4078d96bae383657162aa6dfe46e4c] <==
> logger="UnhandledError"
I0127 12:05:00.468043 1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
W0127 12:06:59.467125 1 handler_proxy.go:99] no RequestInfo found in the context
E0127 12:06:59.467400 1 controller.go:146] "Unhandled Error" err=<
Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
W0127 12:07:00.469666 1 handler_proxy.go:99] no RequestInfo found in the context
E0127 12:07:00.469733 1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
W0127 12:07:00.469763 1 handler_proxy.go:99] no RequestInfo found in the context
E0127 12:07:00.470053 1 controller.go:102] "Unhandled Error" err=<
loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
I0127 12:07:00.471115 1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0127 12:07:00.471146 1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
W0127 12:08:00.472046 1 handler_proxy.go:99] no RequestInfo found in the context
E0127 12:08:00.472268 1 controller.go:102] "Unhandled Error" err=<
loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
W0127 12:08:00.472180 1 handler_proxy.go:99] no RequestInfo found in the context
E0127 12:08:00.472624 1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
I0127 12:08:00.473797 1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0127 12:08:00.473868 1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
==> kube-controller-manager [4fafe9b41d24a0b36339d5cb43a3023744ee747d9f0d780743ce9cc91f21e4b7] <==
I0127 12:03:32.863007 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="39.273µs"
E0127 12:03:36.228202 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 12:03:36.279705 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0127 12:04:06.234112 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 12:04:06.287367 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0127 12:04:36.240589 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 12:04:36.294322 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
I0127 12:04:55.161696 1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="no-preload-976043"
E0127 12:05:06.246599 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 12:05:06.300552 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0127 12:05:36.252919 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 12:05:36.308138 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0127 12:06:06.261277 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 12:06:06.317257 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0127 12:06:36.269445 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 12:06:36.325273 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0127 12:07:06.276054 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 12:07:06.332056 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0127 12:07:36.281988 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 12:07:36.338522 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0127 12:08:06.290040 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 12:08:06.346210 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
I0127 12:08:25.869906 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="119.632µs"
I0127 12:08:29.264385 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="60.153µs"
I0127 12:08:31.994118 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="43.466µs"
==> kube-proxy [f936328e91f32ea805970efb2793e458dc0b62c4c3de292ca1926ef86e0773f6] <==
add table ip kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^
>
E0127 11:47:08.286258 1 proxier.go:733] "Error cleaning up nftables rules" err=<
could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
add table ip6 kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^^
>
I0127 11:47:08.326710 1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.72.171"]
E0127 11:47:08.326839 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0127 11:47:08.518150 1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
I0127 11:47:08.518198 1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0127 11:47:08.518224 1 server_linux.go:170] "Using iptables Proxier"
I0127 11:47:08.523143 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0127 11:47:08.527631 1 server.go:497] "Version info" version="v1.32.1"
I0127 11:47:08.527663 1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0127 11:47:08.532438 1 config.go:199] "Starting service config controller"
I0127 11:47:08.532547 1 shared_informer.go:313] Waiting for caches to sync for service config
I0127 11:47:08.532586 1 config.go:105] "Starting endpoint slice config controller"
I0127 11:47:08.532592 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0127 11:47:08.533130 1 config.go:329] "Starting node config controller"
I0127 11:47:08.533140 1 shared_informer.go:313] Waiting for caches to sync for node config
I0127 11:47:08.636582 1 shared_informer.go:320] Caches are synced for node config
I0127 11:47:08.636629 1 shared_informer.go:320] Caches are synced for service config
I0127 11:47:08.636638 1 shared_informer.go:320] Caches are synced for endpoint slice config
==> kube-scheduler [765852d6ddf176224b3ad9dbebd8640d778f3694ba556d6351fa92740cfd5c40] <==
W0127 11:46:59.490179 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0127 11:46:59.490212 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0127 11:46:59.490417 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0127 11:46:59.490494 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0127 11:47:00.338393 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0127 11:47:00.338440 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0127 11:47:00.363620 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0127 11:47:00.363672 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0127 11:47:00.461212 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
E0127 11:47:00.461282 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0127 11:47:00.467537 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0127 11:47:00.467593 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
W0127 11:47:00.501400 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0127 11:47:00.501527 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0127 11:47:00.568080 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0127 11:47:00.568389 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0127 11:47:00.578072 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0127 11:47:00.578661 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0127 11:47:00.605529 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0127 11:47:00.605830 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0127 11:47:00.643250 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0127 11:47:00.643730 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0127 11:47:00.700651 1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0127 11:47:00.700895 1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
I0127 11:47:03.372561 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Jan 27 12:07:37 no-preload-976043 kubelet[3403]: E0127 12:07:37.850846 3403 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-k2swq_kubernetes-dashboard(50c34ab2-9bca-4cc6-a360-5de0898bfab9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-k2swq" podUID="50c34ab2-9bca-4cc6-a360-5de0898bfab9"
Jan 27 12:07:40 no-preload-976043 kubelet[3403]: E0127 12:07:40.850994 3403 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-cxprr" podUID="fcf4fd1c-5cc8-43ab-a46a-32c4f5559168"
Jan 27 12:07:49 no-preload-976043 kubelet[3403]: I0127 12:07:49.852785 3403 scope.go:117] "RemoveContainer" containerID="bbdc6a1cad758c5db919b30fd742693c6e9d46a4048b2e6f4aca646a3a9ed0bb"
Jan 27 12:07:49 no-preload-976043 kubelet[3403]: E0127 12:07:49.853243 3403 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-k2swq_kubernetes-dashboard(50c34ab2-9bca-4cc6-a360-5de0898bfab9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-k2swq" podUID="50c34ab2-9bca-4cc6-a360-5de0898bfab9"
Jan 27 12:07:55 no-preload-976043 kubelet[3403]: E0127 12:07:55.853285 3403 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-cxprr" podUID="fcf4fd1c-5cc8-43ab-a46a-32c4f5559168"
Jan 27 12:08:01 no-preload-976043 kubelet[3403]: E0127 12:08:01.878224 3403 iptables.go:577] "Could not set up iptables canary" err=<
Jan 27 12:08:01 no-preload-976043 kubelet[3403]: error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
Jan 27 12:08:01 no-preload-976043 kubelet[3403]: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Jan 27 12:08:01 no-preload-976043 kubelet[3403]: Perhaps ip6tables or your kernel needs to be upgraded.
Jan 27 12:08:01 no-preload-976043 kubelet[3403]: > table="nat" chain="KUBE-KUBELET-CANARY"
Jan 27 12:08:03 no-preload-976043 kubelet[3403]: I0127 12:08:03.850704 3403 scope.go:117] "RemoveContainer" containerID="bbdc6a1cad758c5db919b30fd742693c6e9d46a4048b2e6f4aca646a3a9ed0bb"
Jan 27 12:08:03 no-preload-976043 kubelet[3403]: E0127 12:08:03.852958 3403 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-k2swq_kubernetes-dashboard(50c34ab2-9bca-4cc6-a360-5de0898bfab9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-k2swq" podUID="50c34ab2-9bca-4cc6-a360-5de0898bfab9"
Jan 27 12:08:10 no-preload-976043 kubelet[3403]: E0127 12:08:10.873533 3403 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
Jan 27 12:08:10 no-preload-976043 kubelet[3403]: E0127 12:08:10.874306 3403 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
Jan 27 12:08:10 no-preload-976043 kubelet[3403]: E0127 12:08:10.874870 3403 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jw9tj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-f79f97bbb-cxprr_kube-system(fcf4fd1c-5cc8-43ab-a46a-32c4f5559168): ErrImagePull: failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
Jan 27 12:08:10 no-preload-976043 kubelet[3403]: E0127 12:08:10.876703 3403 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-cxprr" podUID="fcf4fd1c-5cc8-43ab-a46a-32c4f5559168"
Jan 27 12:08:14 no-preload-976043 kubelet[3403]: I0127 12:08:14.850209 3403 scope.go:117] "RemoveContainer" containerID="bbdc6a1cad758c5db919b30fd742693c6e9d46a4048b2e6f4aca646a3a9ed0bb"
Jan 27 12:08:14 no-preload-976043 kubelet[3403]: E0127 12:08:14.850395 3403 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-k2swq_kubernetes-dashboard(50c34ab2-9bca-4cc6-a360-5de0898bfab9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-k2swq" podUID="50c34ab2-9bca-4cc6-a360-5de0898bfab9"
Jan 27 12:08:25 no-preload-976043 kubelet[3403]: E0127 12:08:25.850968 3403 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-cxprr" podUID="fcf4fd1c-5cc8-43ab-a46a-32c4f5559168"
Jan 27 12:08:28 no-preload-976043 kubelet[3403]: I0127 12:08:28.850412 3403 scope.go:117] "RemoveContainer" containerID="bbdc6a1cad758c5db919b30fd742693c6e9d46a4048b2e6f4aca646a3a9ed0bb"
Jan 27 12:08:29 no-preload-976043 kubelet[3403]: I0127 12:08:29.244277 3403 scope.go:117] "RemoveContainer" containerID="bbdc6a1cad758c5db919b30fd742693c6e9d46a4048b2e6f4aca646a3a9ed0bb"
Jan 27 12:08:29 no-preload-976043 kubelet[3403]: I0127 12:08:29.244918 3403 scope.go:117] "RemoveContainer" containerID="4a33d428f4f77088c0fe4dc8dc83c37f2b94ad54fa3c966ba47b67e1e3be5b30"
Jan 27 12:08:29 no-preload-976043 kubelet[3403]: E0127 12:08:29.245151 3403 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-k2swq_kubernetes-dashboard(50c34ab2-9bca-4cc6-a360-5de0898bfab9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-k2swq" podUID="50c34ab2-9bca-4cc6-a360-5de0898bfab9"
Jan 27 12:08:31 no-preload-976043 kubelet[3403]: I0127 12:08:31.976638 3403 scope.go:117] "RemoveContainer" containerID="4a33d428f4f77088c0fe4dc8dc83c37f2b94ad54fa3c966ba47b67e1e3be5b30"
Jan 27 12:08:31 no-preload-976043 kubelet[3403]: E0127 12:08:31.976862 3403 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-k2swq_kubernetes-dashboard(50c34ab2-9bca-4cc6-a360-5de0898bfab9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-k2swq" podUID="50c34ab2-9bca-4cc6-a360-5de0898bfab9"
==> kubernetes-dashboard [b9368a2abd7ba22861a95efcc12a6cc204126f8ea0ff3e0ccd83405833df76a9] <==
2025/01/27 11:56:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 11:56:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 11:57:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 11:57:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 11:58:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 11:58:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 11:59:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 11:59:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 12:00:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 12:00:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 12:01:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 12:01:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 12:02:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 12:02:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 12:03:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 12:03:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 12:04:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 12:04:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 12:05:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 12:05:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 12:06:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 12:06:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 12:07:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 12:07:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 12:08:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
==> storage-provisioner [c005698c25d4503489975c78a07c506bad86865b449b9f2471a3f1bf1c7fc878] <==
I0127 11:47:09.725684 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0127 11:47:09.739243 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0127 11:47:09.739332 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0127 11:47:09.754757 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0127 11:47:09.755961 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-976043_a13ba666-369f-4b7e-a067-7b35fb475696!
I0127 11:47:09.760243 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bac9279e-01a7-4e0c-b034-618db64da2f3", APIVersion:"v1", ResourceVersion:"405", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-976043_a13ba666-369f-4b7e-a067-7b35fb475696 became leader
I0127 11:47:09.856548 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-976043_a13ba666-369f-4b7e-a067-7b35fb475696!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-976043 -n no-preload-976043
helpers_test.go:261: (dbg) Run: kubectl --context no-preload-976043 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-cxprr
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context no-preload-976043 describe pod metrics-server-f79f97bbb-cxprr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-976043 describe pod metrics-server-f79f97bbb-cxprr: exit status 1 (68.988158ms)
** stderr **
Error from server (NotFound): pods "metrics-server-f79f97bbb-cxprr" not found
** /stderr **
helpers_test.go:279: kubectl --context no-preload-976043 describe pod metrics-server-f79f97bbb-cxprr: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (1591.89s)