=== RUN TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run: out/minikube-linux-amd64 start -p embed-certs-635679 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.32.1
E0127 14:13:24.636322 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/kindnet-723599/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:13:34.232014 1806070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/auto-723599/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p embed-certs-635679 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.32.1: signal: killed (27m11.119626185s)
-- stdout --
* [embed-certs-635679] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=20327
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/20327-1798877/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-1798877/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the kvm2 driver based on existing profile
* Starting "embed-certs-635679" primary control-plane node in "embed-certs-635679" cluster
* Restarting existing kvm2 VM for "embed-certs-635679" ...
* Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
* Configuring bridge CNI (Container Networking Interface) ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
- Using image docker.io/kubernetesui/dashboard:v2.7.0
- Using image registry.k8s.io/echoserver:1.4
- Using image fake.domain/registry.k8s.io/echoserver:1.4
* Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p embed-certs-635679 addons enable metrics-server
* Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
-- /stdout --
** stderr **
I0127 14:13:21.155797 1860210 out.go:345] Setting OutFile to fd 1 ...
I0127 14:13:21.155930 1860210 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 14:13:21.155943 1860210 out.go:358] Setting ErrFile to fd 2...
I0127 14:13:21.155949 1860210 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 14:13:21.156129 1860210 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-1798877/.minikube/bin
I0127 14:13:21.156671 1860210 out.go:352] Setting JSON to false
I0127 14:13:21.157747 1860210 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":39342,"bootTime":1737947859,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0127 14:13:21.157863 1860210 start.go:139] virtualization: kvm guest
I0127 14:13:21.160045 1860210 out.go:177] * [embed-certs-635679] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
I0127 14:13:21.161168 1860210 out.go:177] - MINIKUBE_LOCATION=20327
I0127 14:13:21.161170 1860210 notify.go:220] Checking for updates...
I0127 14:13:21.163620 1860210 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0127 14:13:21.164982 1860210 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20327-1798877/kubeconfig
I0127 14:13:21.166215 1860210 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-1798877/.minikube
I0127 14:13:21.167350 1860210 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0127 14:13:21.168478 1860210 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0127 14:13:21.169839 1860210 config.go:182] Loaded profile config "embed-certs-635679": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 14:13:21.170231 1860210 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
I0127 14:13:21.170290 1860210 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:13:21.185178 1860210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36217
I0127 14:13:21.185570 1860210 main.go:141] libmachine: () Calling .GetVersion
I0127 14:13:21.186187 1860210 main.go:141] libmachine: Using API Version 1
I0127 14:13:21.186208 1860210 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:13:21.186553 1860210 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:13:21.186758 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .DriverName
I0127 14:13:21.187052 1860210 driver.go:394] Setting default libvirt URI to qemu:///system
I0127 14:13:21.187370 1860210 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
I0127 14:13:21.187420 1860210 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:13:21.202267 1860210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40947
I0127 14:13:21.202785 1860210 main.go:141] libmachine: () Calling .GetVersion
I0127 14:13:21.203261 1860210 main.go:141] libmachine: Using API Version 1
I0127 14:13:21.203283 1860210 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:13:21.203584 1860210 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:13:21.203776 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .DriverName
I0127 14:13:21.239051 1860210 out.go:177] * Using the kvm2 driver based on existing profile
I0127 14:13:21.240262 1860210 start.go:297] selected driver: kvm2
I0127 14:13:21.240276 1860210 start.go:901] validating driver "kvm2" against &{Name:embed-certs-635679 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-635679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.180 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0127 14:13:21.240388 1860210 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0127 14:13:21.241030 1860210 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 14:13:21.241112 1860210 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20327-1798877/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0127 14:13:21.256194 1860210 install.go:137] /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2 version is 1.35.0
I0127 14:13:21.256583 1860210 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0127 14:13:21.256621 1860210 cni.go:84] Creating CNI manager for ""
I0127 14:13:21.256669 1860210 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0127 14:13:21.256708 1860210 start.go:340] cluster config:
{Name:embed-certs-635679 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-635679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.180 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0127 14:13:21.256817 1860210 iso.go:125] acquiring lock: {Name:mk3326e4e64b9d95edc1453384276c21a2957c66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 14:13:21.258831 1860210 out.go:177] * Starting "embed-certs-635679" primary control-plane node in "embed-certs-635679" cluster
I0127 14:13:21.260025 1860210 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
I0127 14:13:21.260062 1860210 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20327-1798877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
I0127 14:13:21.260069 1860210 cache.go:56] Caching tarball of preloaded images
I0127 14:13:21.260176 1860210 preload.go:172] Found /home/jenkins/minikube-integration/20327-1798877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0127 14:13:21.260187 1860210 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on containerd
I0127 14:13:21.260319 1860210 profile.go:143] Saving config to /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/embed-certs-635679/config.json ...
I0127 14:13:21.260495 1860210 start.go:360] acquireMachinesLock for embed-certs-635679: {Name:mk6fcac41a7a21b211b65e56994e625852d1a781 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0127 14:13:21.260541 1860210 start.go:364] duration metric: took 28.059µs to acquireMachinesLock for "embed-certs-635679"
I0127 14:13:21.260559 1860210 start.go:96] Skipping create...Using existing machine configuration
I0127 14:13:21.260569 1860210 fix.go:54] fixHost starting:
I0127 14:13:21.260853 1860210 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
I0127 14:13:21.260892 1860210 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:13:21.274983 1860210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34063
I0127 14:13:21.275451 1860210 main.go:141] libmachine: () Calling .GetVersion
I0127 14:13:21.275954 1860210 main.go:141] libmachine: Using API Version 1
I0127 14:13:21.275977 1860210 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:13:21.276307 1860210 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:13:21.276503 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .DriverName
I0127 14:13:21.276660 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetState
I0127 14:13:21.278297 1860210 fix.go:112] recreateIfNeeded on embed-certs-635679: state=Stopped err=<nil>
I0127 14:13:21.278324 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .DriverName
W0127 14:13:21.278486 1860210 fix.go:138] unexpected machine state, will restart: <nil>
I0127 14:13:21.280608 1860210 out.go:177] * Restarting existing kvm2 VM for "embed-certs-635679" ...
I0127 14:13:21.282118 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .Start
I0127 14:13:21.282296 1860210 main.go:141] libmachine: (embed-certs-635679) starting domain...
I0127 14:13:21.282314 1860210 main.go:141] libmachine: (embed-certs-635679) ensuring networks are active...
I0127 14:13:21.283192 1860210 main.go:141] libmachine: (embed-certs-635679) Ensuring network default is active
I0127 14:13:21.283525 1860210 main.go:141] libmachine: (embed-certs-635679) Ensuring network mk-embed-certs-635679 is active
I0127 14:13:21.283901 1860210 main.go:141] libmachine: (embed-certs-635679) getting domain XML...
I0127 14:13:21.284658 1860210 main.go:141] libmachine: (embed-certs-635679) creating domain...
I0127 14:13:22.486225 1860210 main.go:141] libmachine: (embed-certs-635679) waiting for IP...
I0127 14:13:22.487188 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:13:22.487655 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | unable to find current IP address of domain embed-certs-635679 in network mk-embed-certs-635679
I0127 14:13:22.487730 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | I0127 14:13:22.487644 1860245 retry.go:31] will retry after 224.272713ms: waiting for domain to come up
I0127 14:13:22.713260 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:13:22.713864 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | unable to find current IP address of domain embed-certs-635679 in network mk-embed-certs-635679
I0127 14:13:22.713898 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | I0127 14:13:22.713801 1860245 retry.go:31] will retry after 258.194373ms: waiting for domain to come up
I0127 14:13:22.973378 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:13:22.973976 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | unable to find current IP address of domain embed-certs-635679 in network mk-embed-certs-635679
I0127 14:13:22.974011 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | I0127 14:13:22.973915 1860245 retry.go:31] will retry after 393.696938ms: waiting for domain to come up
I0127 14:13:23.369588 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:13:23.370128 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | unable to find current IP address of domain embed-certs-635679 in network mk-embed-certs-635679
I0127 14:13:23.370157 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | I0127 14:13:23.370080 1860245 retry.go:31] will retry after 521.788404ms: waiting for domain to come up
I0127 14:13:23.893538 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:13:23.894120 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | unable to find current IP address of domain embed-certs-635679 in network mk-embed-certs-635679
I0127 14:13:23.894153 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | I0127 14:13:23.894072 1860245 retry.go:31] will retry after 746.089871ms: waiting for domain to come up
I0127 14:13:24.641317 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:13:24.641869 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | unable to find current IP address of domain embed-certs-635679 in network mk-embed-certs-635679
I0127 14:13:24.641896 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | I0127 14:13:24.641827 1860245 retry.go:31] will retry after 894.333313ms: waiting for domain to come up
I0127 14:13:25.537589 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:13:25.538102 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | unable to find current IP address of domain embed-certs-635679 in network mk-embed-certs-635679
I0127 14:13:25.538133 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | I0127 14:13:25.538046 1860245 retry.go:31] will retry after 974.563517ms: waiting for domain to come up
I0127 14:13:26.514194 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:13:26.514729 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | unable to find current IP address of domain embed-certs-635679 in network mk-embed-certs-635679
I0127 14:13:26.514773 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | I0127 14:13:26.514693 1860245 retry.go:31] will retry after 1.359543608s: waiting for domain to come up
I0127 14:13:27.876285 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:13:27.876898 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | unable to find current IP address of domain embed-certs-635679 in network mk-embed-certs-635679
I0127 14:13:27.876932 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | I0127 14:13:27.876828 1860245 retry.go:31] will retry after 1.168162945s: waiting for domain to come up
I0127 14:13:29.047085 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:13:29.047663 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | unable to find current IP address of domain embed-certs-635679 in network mk-embed-certs-635679
I0127 14:13:29.047710 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | I0127 14:13:29.047643 1860245 retry.go:31] will retry after 2.191940383s: waiting for domain to come up
I0127 14:13:31.240972 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:13:31.241466 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | unable to find current IP address of domain embed-certs-635679 in network mk-embed-certs-635679
I0127 14:13:31.241492 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | I0127 14:13:31.241437 1860245 retry.go:31] will retry after 1.80110911s: waiting for domain to come up
I0127 14:13:33.044812 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:13:33.045257 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | unable to find current IP address of domain embed-certs-635679 in network mk-embed-certs-635679
I0127 14:13:33.045288 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | I0127 14:13:33.045243 1860245 retry.go:31] will retry after 2.233702385s: waiting for domain to come up
I0127 14:13:35.281578 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:13:35.282187 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | unable to find current IP address of domain embed-certs-635679 in network mk-embed-certs-635679
I0127 14:13:35.282213 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | I0127 14:13:35.282118 1860245 retry.go:31] will retry after 3.504793306s: waiting for domain to come up
I0127 14:13:38.788161 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:13:38.788602 1860210 main.go:141] libmachine: (embed-certs-635679) found domain IP: 192.168.61.180
I0127 14:13:38.788627 1860210 main.go:141] libmachine: (embed-certs-635679) reserving static IP address...
I0127 14:13:38.788642 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has current primary IP address 192.168.61.180 and MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:13:38.789050 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | found host DHCP lease matching {name: "embed-certs-635679", mac: "52:54:00:84:cf:47", ip: "192.168.61.180"} in network mk-embed-certs-635679: {Iface:virbr3 ExpiryTime:2025-01-27 15:13:32 +0000 UTC Type:0 Mac:52:54:00:84:cf:47 Iaid: IPaddr:192.168.61.180 Prefix:24 Hostname:embed-certs-635679 Clientid:01:52:54:00:84:cf:47}
I0127 14:13:38.789105 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | skip adding static IP to network mk-embed-certs-635679 - found existing host DHCP lease matching {name: "embed-certs-635679", mac: "52:54:00:84:cf:47", ip: "192.168.61.180"}
I0127 14:13:38.789129 1860210 main.go:141] libmachine: (embed-certs-635679) reserved static IP address 192.168.61.180 for domain embed-certs-635679
I0127 14:13:38.789153 1860210 main.go:141] libmachine: (embed-certs-635679) waiting for SSH...
I0127 14:13:38.789176 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | Getting to WaitForSSH function...
I0127 14:13:38.791170 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:13:38.791460 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:cf:47", ip: ""} in network mk-embed-certs-635679: {Iface:virbr3 ExpiryTime:2025-01-27 15:13:32 +0000 UTC Type:0 Mac:52:54:00:84:cf:47 Iaid: IPaddr:192.168.61.180 Prefix:24 Hostname:embed-certs-635679 Clientid:01:52:54:00:84:cf:47}
I0127 14:13:38.791483 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined IP address 192.168.61.180 and MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:13:38.791606 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | Using SSH client type: external
I0127 14:13:38.791654 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | Using SSH private key: /home/jenkins/minikube-integration/20327-1798877/.minikube/machines/embed-certs-635679/id_rsa (-rw-------)
I0127 14:13:38.791695 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.180 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20327-1798877/.minikube/machines/embed-certs-635679/id_rsa -p 22] /usr/bin/ssh <nil>}
I0127 14:13:38.791712 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | About to run SSH command:
I0127 14:13:38.791725 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | exit 0
I0127 14:13:38.915087 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | SSH cmd err, output: <nil>:
I0127 14:13:39.454657 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetConfigRaw
I0127 14:13:39.455493 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetIP
I0127 14:13:39.458697 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:13:39.459119 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:cf:47", ip: ""} in network mk-embed-certs-635679: {Iface:virbr3 ExpiryTime:2025-01-27 15:13:32 +0000 UTC Type:0 Mac:52:54:00:84:cf:47 Iaid: IPaddr:192.168.61.180 Prefix:24 Hostname:embed-certs-635679 Clientid:01:52:54:00:84:cf:47}
I0127 14:13:39.459163 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined IP address 192.168.61.180 and MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:13:39.459408 1860210 profile.go:143] Saving config to /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/embed-certs-635679/config.json ...
I0127 14:13:39.459597 1860210 machine.go:93] provisionDockerMachine start ...
I0127 14:13:39.459619 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .DriverName
I0127 14:13:39.459816 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHHostname
I0127 14:13:39.463084 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:13:39.463500 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:cf:47", ip: ""} in network mk-embed-certs-635679: {Iface:virbr3 ExpiryTime:2025-01-27 15:13:32 +0000 UTC Type:0 Mac:52:54:00:84:cf:47 Iaid: IPaddr:192.168.61.180 Prefix:24 Hostname:embed-certs-635679 Clientid:01:52:54:00:84:cf:47}
I0127 14:13:39.463532 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined IP address 192.168.61.180 and MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:13:39.463700 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHPort
I0127 14:13:39.463873 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHKeyPath
I0127 14:13:39.464041 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHKeyPath
I0127 14:13:39.464209 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHUsername
I0127 14:13:39.464372 1860210 main.go:141] libmachine: Using SSH client type: native
I0127 14:13:39.464572 1860210 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.61.180 22 <nil> <nil>}
I0127 14:13:39.464583 1860210 main.go:141] libmachine: About to run SSH command:
hostname
I0127 14:13:39.574932 1860210 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0127 14:13:39.574977 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetMachineName
I0127 14:13:39.575205 1860210 buildroot.go:166] provisioning hostname "embed-certs-635679"
I0127 14:13:39.575229 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetMachineName
I0127 14:13:39.575428 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHHostname
I0127 14:13:39.578257 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:13:39.578665 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:cf:47", ip: ""} in network mk-embed-certs-635679: {Iface:virbr3 ExpiryTime:2025-01-27 15:13:32 +0000 UTC Type:0 Mac:52:54:00:84:cf:47 Iaid: IPaddr:192.168.61.180 Prefix:24 Hostname:embed-certs-635679 Clientid:01:52:54:00:84:cf:47}
I0127 14:13:39.578689 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined IP address 192.168.61.180 and MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:13:39.578901 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHPort
I0127 14:13:39.579108 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHKeyPath
I0127 14:13:39.579270 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHKeyPath
I0127 14:13:39.579419 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHUsername
I0127 14:13:39.579576 1860210 main.go:141] libmachine: Using SSH client type: native
I0127 14:13:39.579818 1860210 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.61.180 22 <nil> <nil>}
I0127 14:13:39.579839 1860210 main.go:141] libmachine: About to run SSH command:
sudo hostname embed-certs-635679 && echo "embed-certs-635679" | sudo tee /etc/hostname
I0127 14:13:39.700628 1860210 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-635679
I0127 14:13:39.700666 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHHostname
I0127 14:13:39.703524 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:13:39.704220 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:cf:47", ip: ""} in network mk-embed-certs-635679: {Iface:virbr3 ExpiryTime:2025-01-27 15:13:32 +0000 UTC Type:0 Mac:52:54:00:84:cf:47 Iaid: IPaddr:192.168.61.180 Prefix:24 Hostname:embed-certs-635679 Clientid:01:52:54:00:84:cf:47}
I0127 14:13:39.704271 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined IP address 192.168.61.180 and MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:13:39.704474 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHPort
I0127 14:13:39.704676 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHKeyPath
I0127 14:13:39.704810 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHKeyPath
I0127 14:13:39.704914 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHUsername
I0127 14:13:39.705085 1860210 main.go:141] libmachine: Using SSH client type: native
I0127 14:13:39.705274 1860210 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.61.180 22 <nil> <nil>}
I0127 14:13:39.705297 1860210 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sembed-certs-635679' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-635679/g' /etc/hosts;
else
echo '127.0.1.1 embed-certs-635679' | sudo tee -a /etc/hosts;
fi
fi
I0127 14:13:39.828188 1860210 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0127 14:13:39.828221 1860210 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20327-1798877/.minikube CaCertPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20327-1798877/.minikube}
I0127 14:13:39.828251 1860210 buildroot.go:174] setting up certificates
I0127 14:13:39.828269 1860210 provision.go:84] configureAuth start
I0127 14:13:39.828290 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetMachineName
I0127 14:13:39.828584 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetIP
I0127 14:13:39.831539 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:13:39.831969 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:cf:47", ip: ""} in network mk-embed-certs-635679: {Iface:virbr3 ExpiryTime:2025-01-27 15:13:32 +0000 UTC Type:0 Mac:52:54:00:84:cf:47 Iaid: IPaddr:192.168.61.180 Prefix:24 Hostname:embed-certs-635679 Clientid:01:52:54:00:84:cf:47}
I0127 14:13:39.831999 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined IP address 192.168.61.180 and MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:13:39.832067 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHHostname
I0127 14:13:39.834211 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:13:39.834550 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:cf:47", ip: ""} in network mk-embed-certs-635679: {Iface:virbr3 ExpiryTime:2025-01-27 15:13:32 +0000 UTC Type:0 Mac:52:54:00:84:cf:47 Iaid: IPaddr:192.168.61.180 Prefix:24 Hostname:embed-certs-635679 Clientid:01:52:54:00:84:cf:47}
I0127 14:13:39.834590 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined IP address 192.168.61.180 and MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:13:39.834734 1860210 provision.go:143] copyHostCerts
I0127 14:13:39.834812 1860210 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.pem, removing ...
I0127 14:13:39.834830 1860210 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.pem
I0127 14:13:39.834891 1860210 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.pem (1078 bytes)
I0127 14:13:39.835038 1860210 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-1798877/.minikube/cert.pem, removing ...
I0127 14:13:39.835049 1860210 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-1798877/.minikube/cert.pem
I0127 14:13:39.835074 1860210 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20327-1798877/.minikube/cert.pem (1123 bytes)
I0127 14:13:39.835146 1860210 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-1798877/.minikube/key.pem, removing ...
I0127 14:13:39.835158 1860210 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-1798877/.minikube/key.pem
I0127 14:13:39.835180 1860210 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20327-1798877/.minikube/key.pem (1675 bytes)
I0127 14:13:39.835234 1860210 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca-key.pem org=jenkins.embed-certs-635679 san=[127.0.0.1 192.168.61.180 embed-certs-635679 localhost minikube]
I0127 14:13:39.923744 1860210 provision.go:177] copyRemoteCerts
I0127 14:13:39.923816 1860210 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0127 14:13:39.923848 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHHostname
I0127 14:13:39.926360 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:13:39.926658 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:cf:47", ip: ""} in network mk-embed-certs-635679: {Iface:virbr3 ExpiryTime:2025-01-27 15:13:32 +0000 UTC Type:0 Mac:52:54:00:84:cf:47 Iaid: IPaddr:192.168.61.180 Prefix:24 Hostname:embed-certs-635679 Clientid:01:52:54:00:84:cf:47}
I0127 14:13:39.926687 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined IP address 192.168.61.180 and MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:13:39.926919 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHPort
I0127 14:13:39.927098 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHKeyPath
I0127 14:13:39.927246 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHUsername
I0127 14:13:39.927368 1860210 sshutil.go:53] new ssh client: &{IP:192.168.61.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/embed-certs-635679/id_rsa Username:docker}
I0127 14:13:40.008198 1860210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0127 14:13:40.030294 1860210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
I0127 14:13:40.051055 1860210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0127 14:13:40.072532 1860210 provision.go:87] duration metric: took 244.24352ms to configureAuth
I0127 14:13:40.072578 1860210 buildroot.go:189] setting minikube options for container-runtime
I0127 14:13:40.072788 1860210 config.go:182] Loaded profile config "embed-certs-635679": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 14:13:40.072804 1860210 machine.go:96] duration metric: took 613.194376ms to provisionDockerMachine
I0127 14:13:40.072813 1860210 start.go:293] postStartSetup for "embed-certs-635679" (driver="kvm2")
I0127 14:13:40.072825 1860210 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0127 14:13:40.072852 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .DriverName
I0127 14:13:40.073149 1860210 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0127 14:13:40.073178 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHHostname
I0127 14:13:40.075877 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:13:40.076210 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:cf:47", ip: ""} in network mk-embed-certs-635679: {Iface:virbr3 ExpiryTime:2025-01-27 15:13:32 +0000 UTC Type:0 Mac:52:54:00:84:cf:47 Iaid: IPaddr:192.168.61.180 Prefix:24 Hostname:embed-certs-635679 Clientid:01:52:54:00:84:cf:47}
I0127 14:13:40.076301 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined IP address 192.168.61.180 and MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:13:40.076446 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHPort
I0127 14:13:40.076649 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHKeyPath
I0127 14:13:40.076842 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHUsername
I0127 14:13:40.076978 1860210 sshutil.go:53] new ssh client: &{IP:192.168.61.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/embed-certs-635679/id_rsa Username:docker}
I0127 14:13:40.156185 1860210 ssh_runner.go:195] Run: cat /etc/os-release
I0127 14:13:40.160264 1860210 info.go:137] Remote host: Buildroot 2023.02.9
I0127 14:13:40.160295 1860210 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-1798877/.minikube/addons for local assets ...
I0127 14:13:40.160368 1860210 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-1798877/.minikube/files for local assets ...
I0127 14:13:40.160463 1860210 filesync.go:149] local asset: /home/jenkins/minikube-integration/20327-1798877/.minikube/files/etc/ssl/certs/18060702.pem -> 18060702.pem in /etc/ssl/certs
I0127 14:13:40.160580 1860210 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0127 14:13:40.168956 1860210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/files/etc/ssl/certs/18060702.pem --> /etc/ssl/certs/18060702.pem (1708 bytes)
I0127 14:13:40.190965 1860210 start.go:296] duration metric: took 118.133051ms for postStartSetup
I0127 14:13:40.191014 1860210 fix.go:56] duration metric: took 18.93044406s for fixHost
I0127 14:13:40.191043 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHHostname
I0127 14:13:40.193676 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:13:40.194047 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:cf:47", ip: ""} in network mk-embed-certs-635679: {Iface:virbr3 ExpiryTime:2025-01-27 15:13:32 +0000 UTC Type:0 Mac:52:54:00:84:cf:47 Iaid: IPaddr:192.168.61.180 Prefix:24 Hostname:embed-certs-635679 Clientid:01:52:54:00:84:cf:47}
I0127 14:13:40.194077 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined IP address 192.168.61.180 and MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:13:40.194205 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHPort
I0127 14:13:40.194406 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHKeyPath
I0127 14:13:40.194535 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHKeyPath
I0127 14:13:40.194667 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHUsername
I0127 14:13:40.194824 1860210 main.go:141] libmachine: Using SSH client type: native
I0127 14:13:40.195027 1860210 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.61.180 22 <nil> <nil>}
I0127 14:13:40.195040 1860210 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0127 14:13:40.299552 1860210 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737987220.275198748
I0127 14:13:40.299576 1860210 fix.go:216] guest clock: 1737987220.275198748
I0127 14:13:40.299583 1860210 fix.go:229] Guest: 2025-01-27 14:13:40.275198748 +0000 UTC Remote: 2025-01-27 14:13:40.191018899 +0000 UTC m=+19.075426547 (delta=84.179849ms)
I0127 14:13:40.299608 1860210 fix.go:200] guest clock delta is within tolerance: 84.179849ms
I0127 14:13:40.299615 1860210 start.go:83] releasing machines lock for "embed-certs-635679", held for 19.039062058s
I0127 14:13:40.299676 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .DriverName
I0127 14:13:40.299993 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetIP
I0127 14:13:40.302964 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:13:40.303339 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:cf:47", ip: ""} in network mk-embed-certs-635679: {Iface:virbr3 ExpiryTime:2025-01-27 15:13:32 +0000 UTC Type:0 Mac:52:54:00:84:cf:47 Iaid: IPaddr:192.168.61.180 Prefix:24 Hostname:embed-certs-635679 Clientid:01:52:54:00:84:cf:47}
I0127 14:13:40.303373 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined IP address 192.168.61.180 and MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:13:40.303518 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .DriverName
I0127 14:13:40.304033 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .DriverName
I0127 14:13:40.304226 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .DriverName
I0127 14:13:40.304347 1860210 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0127 14:13:40.304392 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHHostname
I0127 14:13:40.304399 1860210 ssh_runner.go:195] Run: cat /version.json
I0127 14:13:40.304437 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHHostname
I0127 14:13:40.307285 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:13:40.307612 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:13:40.307688 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:cf:47", ip: ""} in network mk-embed-certs-635679: {Iface:virbr3 ExpiryTime:2025-01-27 15:13:32 +0000 UTC Type:0 Mac:52:54:00:84:cf:47 Iaid: IPaddr:192.168.61.180 Prefix:24 Hostname:embed-certs-635679 Clientid:01:52:54:00:84:cf:47}
I0127 14:13:40.307709 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined IP address 192.168.61.180 and MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:13:40.307894 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHPort
I0127 14:13:40.308042 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:cf:47", ip: ""} in network mk-embed-certs-635679: {Iface:virbr3 ExpiryTime:2025-01-27 15:13:32 +0000 UTC Type:0 Mac:52:54:00:84:cf:47 Iaid: IPaddr:192.168.61.180 Prefix:24 Hostname:embed-certs-635679 Clientid:01:52:54:00:84:cf:47}
I0127 14:13:40.308069 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined IP address 192.168.61.180 and MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:13:40.308109 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHKeyPath
I0127 14:13:40.308314 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHPort
I0127 14:13:40.308322 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHUsername
I0127 14:13:40.308479 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHKeyPath
I0127 14:13:40.308670 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHUsername
I0127 14:13:40.308719 1860210 sshutil.go:53] new ssh client: &{IP:192.168.61.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/embed-certs-635679/id_rsa Username:docker}
I0127 14:13:40.308823 1860210 sshutil.go:53] new ssh client: &{IP:192.168.61.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/embed-certs-635679/id_rsa Username:docker}
I0127 14:13:40.416920 1860210 ssh_runner.go:195] Run: systemctl --version
I0127 14:13:40.422621 1860210 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0127 14:13:40.427810 1860210 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0127 14:13:40.427863 1860210 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0127 14:13:40.442459 1860210 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0127 14:13:40.442486 1860210 start.go:495] detecting cgroup driver to use...
I0127 14:13:40.442564 1860210 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0127 14:13:40.472735 1860210 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0127 14:13:40.487526 1860210 docker.go:217] disabling cri-docker service (if available) ...
I0127 14:13:40.487581 1860210 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0127 14:13:40.500662 1860210 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0127 14:13:40.514200 1860210 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0127 14:13:40.637821 1860210 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0127 14:13:40.782905 1860210 docker.go:233] disabling docker service ...
I0127 14:13:40.782978 1860210 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0127 14:13:40.796697 1860210 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0127 14:13:40.808719 1860210 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0127 14:13:40.941152 1860210 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0127 14:13:41.056187 1860210 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0127 14:13:41.069051 1860210 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0127 14:13:41.085641 1860210 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0127 14:13:41.094778 1860210 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0127 14:13:41.105068 1860210 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0127 14:13:41.105126 1860210 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0127 14:13:41.118970 1860210 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0127 14:13:41.129142 1860210 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0127 14:13:41.139297 1860210 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0127 14:13:41.148963 1860210 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0127 14:13:41.159097 1860210 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0127 14:13:41.168571 1860210 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0127 14:13:41.178272 1860210 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0127 14:13:41.187611 1860210 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0127 14:13:41.196779 1860210 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0127 14:13:41.196835 1860210 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0127 14:13:41.209411 1860210 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0127 14:13:41.217986 1860210 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 14:13:41.331662 1860210 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0127 14:13:41.359894 1860210 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0127 14:13:41.359985 1860210 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0127 14:13:41.363948 1860210 retry.go:31] will retry after 579.51809ms: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
I0127 14:13:41.943710 1860210 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0127 14:13:41.948775 1860210 start.go:563] Will wait 60s for crictl version
I0127 14:13:41.948834 1860210 ssh_runner.go:195] Run: which crictl
I0127 14:13:41.952580 1860210 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0127 14:13:41.989078 1860210 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.7.23
RuntimeApiVersion: v1
I0127 14:13:41.989184 1860210 ssh_runner.go:195] Run: containerd --version
I0127 14:13:42.014553 1860210 ssh_runner.go:195] Run: containerd --version
I0127 14:13:42.039584 1860210 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
I0127 14:13:42.040834 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetIP
I0127 14:13:42.044160 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:13:42.044561 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:cf:47", ip: ""} in network mk-embed-certs-635679: {Iface:virbr3 ExpiryTime:2025-01-27 15:13:32 +0000 UTC Type:0 Mac:52:54:00:84:cf:47 Iaid: IPaddr:192.168.61.180 Prefix:24 Hostname:embed-certs-635679 Clientid:01:52:54:00:84:cf:47}
I0127 14:13:42.044593 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined IP address 192.168.61.180 and MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:13:42.044836 1860210 ssh_runner.go:195] Run: grep 192.168.61.1 host.minikube.internal$ /etc/hosts
I0127 14:13:42.048820 1860210 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0127 14:13:42.061014 1860210 kubeadm.go:883] updating cluster {Name:embed-certs-635679 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-635679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.180 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0127 14:13:42.061136 1860210 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
I0127 14:13:42.061189 1860210 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 14:13:42.104465 1860210 containerd.go:627] all images are preloaded for containerd runtime.
I0127 14:13:42.104489 1860210 containerd.go:534] Images already preloaded, skipping extraction
I0127 14:13:42.104539 1860210 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 14:13:42.140076 1860210 containerd.go:627] all images are preloaded for containerd runtime.
I0127 14:13:42.140103 1860210 cache_images.go:84] Images are preloaded, skipping loading
I0127 14:13:42.140117 1860210 kubeadm.go:934] updating node { 192.168.61.180 8443 v1.32.1 containerd true true} ...
I0127 14:13:42.140295 1860210 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-635679 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.180
[Install]
config:
{KubernetesVersion:v1.32.1 ClusterName:embed-certs-635679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0127 14:13:42.140367 1860210 ssh_runner.go:195] Run: sudo crictl info
I0127 14:13:42.173422 1860210 cni.go:84] Creating CNI manager for ""
I0127 14:13:42.173454 1860210 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0127 14:13:42.173470 1860210 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0127 14:13:42.173502 1860210 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.180 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-635679 NodeName:embed-certs-635679 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0127 14:13:42.173687 1860210 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.61.180
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "embed-certs-635679"
  kubeletExtraArgs:
  - name: "node-ip"
    value: "192.168.61.180"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.61.180"]
  extraArgs:
  - name: "enable-admission-plugins"
    value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
  - name: "allocate-node-cidrs"
    value: "true"
  - name: "leader-elect"
    value: "false"
scheduler:
  extraArgs:
  - name: "leader-elect"
    value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
    - name: "proxy-refresh-interval"
      value: "70000"
kubernetesVersion: v1.32.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0127 14:13:42.173767 1860210 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
I0127 14:13:42.184900 1860210 binaries.go:44] Found k8s binaries, skipping transfer
I0127 14:13:42.184991 1860210 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0127 14:13:42.194622 1860210 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
I0127 14:13:42.210525 1860210 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0127 14:13:42.226019 1860210 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2314 bytes)
I0127 14:13:42.241933 1860210 ssh_runner.go:195] Run: grep 192.168.61.180 control-plane.minikube.internal$ /etc/hosts
I0127 14:13:42.245391 1860210 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.180 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0127 14:13:42.256498 1860210 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 14:13:42.375107 1860210 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0127 14:13:42.397661 1860210 certs.go:68] Setting up /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/embed-certs-635679 for IP: 192.168.61.180
I0127 14:13:42.397701 1860210 certs.go:194] generating shared ca certs ...
I0127 14:13:42.397747 1860210 certs.go:226] acquiring lock for ca certs: {Name:mkc6b95fb3d2c0d0c7049cde446028a0d731f231 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 14:13:42.397956 1860210 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.key
I0127 14:13:42.398069 1860210 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/proxy-client-ca.key
I0127 14:13:42.398092 1860210 certs.go:256] generating profile certs ...
I0127 14:13:42.398253 1860210 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/embed-certs-635679/client.key
I0127 14:13:42.398340 1860210 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/embed-certs-635679/apiserver.key.c3222ec9
I0127 14:13:42.398404 1860210 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/embed-certs-635679/proxy-client.key
I0127 14:13:42.398585 1860210 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/1806070.pem (1338 bytes)
W0127 14:13:42.398626 1860210 certs.go:480] ignoring /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/1806070_empty.pem, impossibly tiny 0 bytes
I0127 14:13:42.398640 1860210 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca-key.pem (1675 bytes)
I0127 14:13:42.398671 1860210 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca.pem (1078 bytes)
I0127 14:13:42.398704 1860210 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/cert.pem (1123 bytes)
I0127 14:13:42.398735 1860210 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/key.pem (1675 bytes)
I0127 14:13:42.398828 1860210 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/files/etc/ssl/certs/18060702.pem (1708 bytes)
I0127 14:13:42.399837 1860210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0127 14:13:42.433852 1860210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0127 14:13:42.458311 1860210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0127 14:13:42.481339 1860210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0127 14:13:42.508328 1860210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/embed-certs-635679/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
I0127 14:13:42.540137 1860210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/embed-certs-635679/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0127 14:13:42.568660 1860210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/embed-certs-635679/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0127 14:13:42.591132 1860210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/embed-certs-635679/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0127 14:13:42.616298 1860210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/files/etc/ssl/certs/18060702.pem --> /usr/share/ca-certificates/18060702.pem (1708 bytes)
I0127 14:13:42.641456 1860210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0127 14:13:42.667039 1860210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/1806070.pem --> /usr/share/ca-certificates/1806070.pem (1338 bytes)
I0127 14:13:42.690033 1860210 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0127 14:13:42.707437 1860210 ssh_runner.go:195] Run: openssl version
I0127 14:13:42.713417 1860210 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18060702.pem && ln -fs /usr/share/ca-certificates/18060702.pem /etc/ssl/certs/18060702.pem"
I0127 14:13:42.724271 1860210 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18060702.pem
I0127 14:13:42.728246 1860210 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 13:10 /usr/share/ca-certificates/18060702.pem
I0127 14:13:42.728300 1860210 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18060702.pem
I0127 14:13:42.734063 1860210 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18060702.pem /etc/ssl/certs/3ec20f2e.0"
I0127 14:13:42.744802 1860210 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0127 14:13:42.755448 1860210 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0127 14:13:42.761015 1860210 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 13:02 /usr/share/ca-certificates/minikubeCA.pem
I0127 14:13:42.761067 1860210 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0127 14:13:42.768368 1860210 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0127 14:13:42.778726 1860210 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1806070.pem && ln -fs /usr/share/ca-certificates/1806070.pem /etc/ssl/certs/1806070.pem"
I0127 14:13:42.788563 1860210 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1806070.pem
I0127 14:13:42.792702 1860210 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 13:10 /usr/share/ca-certificates/1806070.pem
I0127 14:13:42.792758 1860210 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1806070.pem
I0127 14:13:42.798170 1860210 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1806070.pem /etc/ssl/certs/51391683.0"
I0127 14:13:42.807686 1860210 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0127 14:13:42.811838 1860210 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0127 14:13:42.817410 1860210 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0127 14:13:42.822851 1860210 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0127 14:13:42.828321 1860210 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0127 14:13:42.833665 1860210 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0127 14:13:42.839354 1860210 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
I0127 14:13:42.844998 1860210 kubeadm.go:392] StartCluster: {Name:embed-certs-635679 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-635679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.180 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0127 14:13:42.845087 1860210 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0127 14:13:42.845151 1860210 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0127 14:13:42.884238 1860210 cri.go:89] found id: "32f7119679a11962824e58e5b6f1deebde8552cd40a9aef87743a355e2c311e3"
I0127 14:13:42.884264 1860210 cri.go:89] found id: "4edb9bf088a7d0b36405d0de8e3c5989acb34e7042d9021c696e3321f488b9a3"
I0127 14:13:42.884269 1860210 cri.go:89] found id: "2af6cfde618af2b9e79131c949655785d535f681dd62ce0209b62e62574e16b0"
I0127 14:13:42.884272 1860210 cri.go:89] found id: "c0e7fafaca98c2b01e296a632a5b08e714c3f8b71473add8914de864dc58a256"
I0127 14:13:42.884275 1860210 cri.go:89] found id: "57bb5c43279ffb0ccc598e65301ecf90861c8a88230919c92f86bbc8b9990027"
I0127 14:13:42.884279 1860210 cri.go:89] found id: "fb1d1cc0a1ab37ae8dfdef1aaec53e04c544a0bdff9efcaf81b043bad63cac34"
I0127 14:13:42.884283 1860210 cri.go:89] found id: "1536cb7c9e5e69c77ec5eaffae1da0a1a546fbef499fa7e963764811204997d3"
I0127 14:13:42.884287 1860210 cri.go:89] found id: ""
I0127 14:13:42.884361 1860210 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
W0127 14:13:42.899419 1860210 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-01-27T14:13:42Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I0127 14:13:42.899510 1860210 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0127 14:13:42.910122 1860210 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0127 14:13:42.910145 1860210 kubeadm.go:593] restartPrimaryControlPlane start ...
I0127 14:13:42.910195 1860210 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0127 14:13:42.919020 1860210 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0127 14:13:42.919798 1860210 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-635679" does not appear in /home/jenkins/minikube-integration/20327-1798877/kubeconfig
I0127 14:13:42.920141 1860210 kubeconfig.go:62] /home/jenkins/minikube-integration/20327-1798877/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-635679" cluster setting kubeconfig missing "embed-certs-635679" context setting]
I0127 14:13:42.920780 1860210 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-1798877/kubeconfig: {Name:mk83da0b53bf0d0962bc51b16c589da37a41b6e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 14:13:42.922301 1860210 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0127 14:13:42.931572 1860210 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.180
I0127 14:13:42.931609 1860210 kubeadm.go:1160] stopping kube-system containers ...
I0127 14:13:42.931623 1860210 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
I0127 14:13:42.931679 1860210 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0127 14:13:42.973261 1860210 cri.go:89] found id: "32f7119679a11962824e58e5b6f1deebde8552cd40a9aef87743a355e2c311e3"
I0127 14:13:42.973291 1860210 cri.go:89] found id: "4edb9bf088a7d0b36405d0de8e3c5989acb34e7042d9021c696e3321f488b9a3"
I0127 14:13:42.973298 1860210 cri.go:89] found id: "2af6cfde618af2b9e79131c949655785d535f681dd62ce0209b62e62574e16b0"
I0127 14:13:42.973304 1860210 cri.go:89] found id: "c0e7fafaca98c2b01e296a632a5b08e714c3f8b71473add8914de864dc58a256"
I0127 14:13:42.973308 1860210 cri.go:89] found id: "57bb5c43279ffb0ccc598e65301ecf90861c8a88230919c92f86bbc8b9990027"
I0127 14:13:42.973313 1860210 cri.go:89] found id: "fb1d1cc0a1ab37ae8dfdef1aaec53e04c544a0bdff9efcaf81b043bad63cac34"
I0127 14:13:42.973317 1860210 cri.go:89] found id: "1536cb7c9e5e69c77ec5eaffae1da0a1a546fbef499fa7e963764811204997d3"
I0127 14:13:42.973321 1860210 cri.go:89] found id: ""
I0127 14:13:42.973327 1860210 cri.go:252] Stopping containers: [32f7119679a11962824e58e5b6f1deebde8552cd40a9aef87743a355e2c311e3 4edb9bf088a7d0b36405d0de8e3c5989acb34e7042d9021c696e3321f488b9a3 2af6cfde618af2b9e79131c949655785d535f681dd62ce0209b62e62574e16b0 c0e7fafaca98c2b01e296a632a5b08e714c3f8b71473add8914de864dc58a256 57bb5c43279ffb0ccc598e65301ecf90861c8a88230919c92f86bbc8b9990027 fb1d1cc0a1ab37ae8dfdef1aaec53e04c544a0bdff9efcaf81b043bad63cac34 1536cb7c9e5e69c77ec5eaffae1da0a1a546fbef499fa7e963764811204997d3]
I0127 14:13:42.973384 1860210 ssh_runner.go:195] Run: which crictl
I0127 14:13:42.977408 1860210 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 32f7119679a11962824e58e5b6f1deebde8552cd40a9aef87743a355e2c311e3 4edb9bf088a7d0b36405d0de8e3c5989acb34e7042d9021c696e3321f488b9a3 2af6cfde618af2b9e79131c949655785d535f681dd62ce0209b62e62574e16b0 c0e7fafaca98c2b01e296a632a5b08e714c3f8b71473add8914de864dc58a256 57bb5c43279ffb0ccc598e65301ecf90861c8a88230919c92f86bbc8b9990027 fb1d1cc0a1ab37ae8dfdef1aaec53e04c544a0bdff9efcaf81b043bad63cac34 1536cb7c9e5e69c77ec5eaffae1da0a1a546fbef499fa7e963764811204997d3
I0127 14:13:43.019472 1860210 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0127 14:13:43.035447 1860210 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0127 14:13:43.044399 1860210 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0127 14:13:43.044428 1860210 kubeadm.go:157] found existing configuration files:
I0127 14:13:43.044484 1860210 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0127 14:13:43.052786 1860210 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0127 14:13:43.052850 1860210 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0127 14:13:43.061481 1860210 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0127 14:13:43.070018 1860210 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0127 14:13:43.070077 1860210 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0127 14:13:43.079149 1860210 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0127 14:13:43.087642 1860210 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0127 14:13:43.087691 1860210 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0127 14:13:43.096876 1860210 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0127 14:13:43.105836 1860210 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0127 14:13:43.105898 1860210 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0127 14:13:43.114179 1860210 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0127 14:13:43.123378 1860210 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0127 14:13:43.253719 1860210 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0127 14:13:44.859245 1860210 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.605475862s)
I0127 14:13:44.859296 1860210 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0127 14:13:45.067517 1860210 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0127 14:13:45.156357 1860210 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0127 14:13:45.250349 1860210 api_server.go:52] waiting for apiserver process to appear ...
I0127 14:13:45.250445 1860210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 14:13:45.751433 1860210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 14:13:46.250930 1860210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 14:13:46.269503 1860210 api_server.go:72] duration metric: took 1.019153447s to wait for apiserver process to appear ...
I0127 14:13:46.269536 1860210 api_server.go:88] waiting for apiserver healthz status ...
I0127 14:13:46.269562 1860210 api_server.go:253] Checking apiserver healthz at https://192.168.61.180:8443/healthz ...
I0127 14:13:46.270172 1860210 api_server.go:269] stopped: https://192.168.61.180:8443/healthz: Get "https://192.168.61.180:8443/healthz": dial tcp 192.168.61.180:8443: connect: connection refused
I0127 14:13:46.770303 1860210 api_server.go:253] Checking apiserver healthz at https://192.168.61.180:8443/healthz ...
I0127 14:13:48.602517 1860210 api_server.go:279] https://192.168.61.180:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0127 14:13:48.602550 1860210 api_server.go:103] status: https://192.168.61.180:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0127 14:13:48.602568 1860210 api_server.go:253] Checking apiserver healthz at https://192.168.61.180:8443/healthz ...
I0127 14:13:48.630699 1860210 api_server.go:279] https://192.168.61.180:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0127 14:13:48.630753 1860210 api_server.go:103] status: https://192.168.61.180:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0127 14:13:48.770158 1860210 api_server.go:253] Checking apiserver healthz at https://192.168.61.180:8443/healthz ...
I0127 14:13:48.776132 1860210 api_server.go:279] https://192.168.61.180:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0127 14:13:48.776165 1860210 api_server.go:103] status: https://192.168.61.180:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0127 14:13:49.269810 1860210 api_server.go:253] Checking apiserver healthz at https://192.168.61.180:8443/healthz ...
I0127 14:13:49.283288 1860210 api_server.go:279] https://192.168.61.180:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0127 14:13:49.283333 1860210 api_server.go:103] status: https://192.168.61.180:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0127 14:13:49.770613 1860210 api_server.go:253] Checking apiserver healthz at https://192.168.61.180:8443/healthz ...
I0127 14:13:49.781512 1860210 api_server.go:279] https://192.168.61.180:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0127 14:13:49.781556 1860210 api_server.go:103] status: https://192.168.61.180:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0127 14:13:50.270274 1860210 api_server.go:253] Checking apiserver healthz at https://192.168.61.180:8443/healthz ...
I0127 14:13:50.276610 1860210 api_server.go:279] https://192.168.61.180:8443/healthz returned 200:
ok
I0127 14:13:50.285654 1860210 api_server.go:141] control plane version: v1.32.1
I0127 14:13:50.285703 1860210 api_server.go:131] duration metric: took 4.01615716s to wait for apiserver health ...
I0127 14:13:50.285716 1860210 cni.go:84] Creating CNI manager for ""
I0127 14:13:50.285725 1860210 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0127 14:13:50.287872 1860210 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0127 14:13:50.289432 1860210 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0127 14:13:50.300066 1860210 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0127 14:13:50.328085 1860210 system_pods.go:43] waiting for kube-system pods to appear ...
I0127 14:13:50.341514 1860210 system_pods.go:59] 8 kube-system pods found
I0127 14:13:50.341585 1860210 system_pods.go:61] "coredns-668d6bf9bc-xx6ks" [ae9e15c0-59d8-4285-b8bb-94b70a9ebc40] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0127 14:13:50.341603 1860210 system_pods.go:61] "etcd-embed-certs-635679" [927e5a6c-7d19-4555-86eb-d567f3ce4a8a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0127 14:13:50.341617 1860210 system_pods.go:61] "kube-apiserver-embed-certs-635679" [4ca30362-b3d5-47ce-ae6e-6c0c5d8b29e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0127 14:13:50.341634 1860210 system_pods.go:61] "kube-controller-manager-embed-certs-635679" [af0fa1a5-481a-44d4-9965-f49aeb50d944] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0127 14:13:50.341644 1860210 system_pods.go:61] "kube-proxy-8cwvc" [66c2e806-d895-43bd-aecf-89e00bc47f2f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0127 14:13:50.341663 1860210 system_pods.go:61] "kube-scheduler-embed-certs-635679" [a3338c56-f565-4a80-84a5-c776e5b932fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0127 14:13:50.341673 1860210 system_pods.go:61] "metrics-server-f79f97bbb-mt5gf" [682d32cc-fec1-4a59-b209-e0430fdb9aba] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0127 14:13:50.341701 1860210 system_pods.go:61] "storage-provisioner" [f1cbcd32-4a98-4100-a973-f4c0e241a76e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0127 14:13:50.341721 1860210 system_pods.go:74] duration metric: took 13.601769ms to wait for pod list to return data ...
I0127 14:13:50.341734 1860210 node_conditions.go:102] verifying NodePressure condition ...
I0127 14:13:50.351141 1860210 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0127 14:13:50.351180 1860210 node_conditions.go:123] node cpu capacity is 2
I0127 14:13:50.351196 1860210 node_conditions.go:105] duration metric: took 9.451637ms to run NodePressure ...
I0127 14:13:50.351221 1860210 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0127 14:13:50.638063 1860210 kubeadm.go:724] waiting for restarted kubelet to initialise ...
I0127 14:13:50.644591 1860210 kubeadm.go:739] kubelet initialised
I0127 14:13:50.644623 1860210 kubeadm.go:740] duration metric: took 6.526455ms waiting for restarted kubelet to initialise ...
I0127 14:13:50.644635 1860210 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0127 14:13:50.649514 1860210 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-xx6ks" in "kube-system" namespace to be "Ready" ...
I0127 14:13:52.657424 1860210 pod_ready.go:103] pod "coredns-668d6bf9bc-xx6ks" in "kube-system" namespace has status "Ready":"False"
I0127 14:13:55.156449 1860210 pod_ready.go:103] pod "coredns-668d6bf9bc-xx6ks" in "kube-system" namespace has status "Ready":"False"
I0127 14:13:55.657432 1860210 pod_ready.go:93] pod "coredns-668d6bf9bc-xx6ks" in "kube-system" namespace has status "Ready":"True"
I0127 14:13:55.657455 1860210 pod_ready.go:82] duration metric: took 5.007903814s for pod "coredns-668d6bf9bc-xx6ks" in "kube-system" namespace to be "Ready" ...
I0127 14:13:55.657465 1860210 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-635679" in "kube-system" namespace to be "Ready" ...
I0127 14:13:57.663788 1860210 pod_ready.go:93] pod "etcd-embed-certs-635679" in "kube-system" namespace has status "Ready":"True"
I0127 14:13:57.663817 1860210 pod_ready.go:82] duration metric: took 2.006346137s for pod "etcd-embed-certs-635679" in "kube-system" namespace to be "Ready" ...
I0127 14:13:57.663832 1860210 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-635679" in "kube-system" namespace to be "Ready" ...
I0127 14:13:59.671160 1860210 pod_ready.go:103] pod "kube-apiserver-embed-certs-635679" in "kube-system" namespace has status "Ready":"False"
I0127 14:14:02.170505 1860210 pod_ready.go:103] pod "kube-apiserver-embed-certs-635679" in "kube-system" namespace has status "Ready":"False"
I0127 14:14:04.171363 1860210 pod_ready.go:103] pod "kube-apiserver-embed-certs-635679" in "kube-system" namespace has status "Ready":"False"
I0127 14:14:06.171320 1860210 pod_ready.go:93] pod "kube-apiserver-embed-certs-635679" in "kube-system" namespace has status "Ready":"True"
I0127 14:14:06.171344 1860210 pod_ready.go:82] duration metric: took 8.507503047s for pod "kube-apiserver-embed-certs-635679" in "kube-system" namespace to be "Ready" ...
I0127 14:14:06.171355 1860210 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-635679" in "kube-system" namespace to be "Ready" ...
I0127 14:14:06.177197 1860210 pod_ready.go:93] pod "kube-controller-manager-embed-certs-635679" in "kube-system" namespace has status "Ready":"True"
I0127 14:14:06.177216 1860210 pod_ready.go:82] duration metric: took 5.855315ms for pod "kube-controller-manager-embed-certs-635679" in "kube-system" namespace to be "Ready" ...
I0127 14:14:06.177225 1860210 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-8cwvc" in "kube-system" namespace to be "Ready" ...
I0127 14:14:06.181880 1860210 pod_ready.go:93] pod "kube-proxy-8cwvc" in "kube-system" namespace has status "Ready":"True"
I0127 14:14:06.181903 1860210 pod_ready.go:82] duration metric: took 4.66997ms for pod "kube-proxy-8cwvc" in "kube-system" namespace to be "Ready" ...
I0127 14:14:06.181914 1860210 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-635679" in "kube-system" namespace to be "Ready" ...
I0127 14:14:06.186791 1860210 pod_ready.go:93] pod "kube-scheduler-embed-certs-635679" in "kube-system" namespace has status "Ready":"True"
I0127 14:14:06.186811 1860210 pod_ready.go:82] duration metric: took 4.890146ms for pod "kube-scheduler-embed-certs-635679" in "kube-system" namespace to be "Ready" ...
I0127 14:14:06.186823 1860210 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace to be "Ready" ...
I0127 14:14:08.195701 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
I0127 14:14:10.694821 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
I0127 14:14:13.193623 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
I0127 14:14:15.693213 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
I0127 14:14:18.192785 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
I0127 14:14:20.194464 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
I0127 14:14:22.693253 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
I0127 14:14:24.694821 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
I0127 14:14:26.697694 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
I0127 14:14:29.193107 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
I0127 14:14:31.194686 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
I0127 14:14:33.692966 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
I0127 14:14:36.192603 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
I0127 14:14:38.192896 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
I0127 14:14:40.193587 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
I0127 14:14:42.195047 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
I0127 14:14:44.195462 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
I0127 14:14:46.698937 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
I0127 14:14:49.193562 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
I0127 14:14:51.194152 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
(... the same pod_ready.go:103 check repeated roughly every 2.5s from 14:14:51 through 14:18:02; pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace stayed "Ready":"False" throughout ...)
I0127 14:18:04.692411 1860210 pod_ready.go:103] pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace has status "Ready":"False"
I0127 14:18:06.187223 1860210 pod_ready.go:82] duration metric: took 4m0.000379978s for pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace to be "Ready" ...
E0127 14:18:06.187264 1860210 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-mt5gf" in "kube-system" namespace to be "Ready" (will not retry!)
I0127 14:18:06.187307 1860210 pod_ready.go:39] duration metric: took 4m15.542651284s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
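The four-minute block above is a bounded readiness poll. A minimal standalone sketch of the same pattern, shelling out to kubectl rather than using minikube's internal client-go helpers (pod name, namespace, and the ~2.5s interval are taken from the log):

// pod_ready_sketch.go - check a pod's Ready condition on an interval and
// give up after a deadline, like the wait loop in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitPodReady(namespace, pod string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// jsonpath extracts the status of the Ready condition: "True" or "False".
		out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", pod,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			return nil
		}
		fmt.Printf("pod %q in %q namespace has status \"Ready\":\"False\"\n", pod, namespace)
		time.Sleep(2500 * time.Millisecond) // the log shows ~2.5s between checks
	}
	return fmt.Errorf("timed out waiting %s for pod %q to be Ready", timeout, pod)
}

func main() {
	if err := waitPodReady("kube-system", "metrics-server-f79f97bbb-mt5gf", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}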
I0127 14:18:06.187351 1860210 kubeadm.go:597] duration metric: took 4m23.277196896s to restartPrimaryControlPlane
W0127 14:18:06.187434 1860210 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
I0127 14:18:06.187467 1860210 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I0127 14:18:07.911632 1860210 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.724132799s)
I0127 14:18:07.911722 1860210 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0127 14:18:07.931280 1860210 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0127 14:18:07.944298 1860210 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0127 14:18:07.954011 1860210 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0127 14:18:07.954034 1860210 kubeadm.go:157] found existing configuration files:
I0127 14:18:07.954077 1860210 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0127 14:18:07.963218 1860210 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0127 14:18:07.963275 1860210 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0127 14:18:07.973745 1860210 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0127 14:18:07.982893 1860210 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0127 14:18:07.982960 1860210 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0127 14:18:07.992093 1860210 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0127 14:18:08.001260 1860210 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0127 14:18:08.001322 1860210 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0127 14:18:08.012990 1860210 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0127 14:18:08.021707 1860210 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0127 14:18:08.021763 1860210 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
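The grep-then-rm sequence above is a stale-kubeconfig sweep: each file under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, otherwise it is removed so kubeadm can rewrite it. A sketch of that loop, with the paths and endpoint taken from the log (on the real node this runs over SSH as root):

// stale_config_sketch.go - remove kubeconfigs that do not point at the
// expected control-plane endpoint, mirroring the cleanup loop above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		// grep exits non-zero when the endpoint (or the file itself) is missing.
		if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err != nil {
			fmt.Printf("%q not found in %s - removing\n", endpoint, conf)
			exec.Command("sudo", "rm", "-f", conf).Run()
		}
	}
}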
I0127 14:18:08.031820 1860210 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0127 14:18:08.073451 1860210 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
I0127 14:18:08.073535 1860210 kubeadm.go:310] [preflight] Running pre-flight checks
I0127 14:18:08.185904 1860210 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0127 14:18:08.186103 1860210 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0127 14:18:08.186246 1860210 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0127 14:18:08.192454 1860210 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0127 14:18:08.194520 1860210 out.go:235] - Generating certificates and keys ...
I0127 14:18:08.194603 1860210 kubeadm.go:310] [certs] Using existing ca certificate authority
I0127 14:18:08.194694 1860210 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0127 14:18:08.194839 1860210 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0127 14:18:08.194927 1860210 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
I0127 14:18:08.195012 1860210 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
I0127 14:18:08.195078 1860210 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
I0127 14:18:08.195179 1860210 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
I0127 14:18:08.195283 1860210 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
I0127 14:18:08.195394 1860210 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0127 14:18:08.196373 1860210 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0127 14:18:08.196466 1860210 kubeadm.go:310] [certs] Using the existing "sa" key
I0127 14:18:08.196542 1860210 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0127 14:18:08.321098 1860210 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0127 14:18:08.541093 1860210 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0127 14:18:08.651159 1860210 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0127 14:18:08.826558 1860210 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0127 14:18:08.988229 1860210 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0127 14:18:08.988652 1860210 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0127 14:18:08.991442 1860210 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0127 14:18:08.993001 1860210 out.go:235] - Booting up control plane ...
I0127 14:18:08.993138 1860210 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0127 14:18:08.993209 1860210 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0127 14:18:08.994107 1860210 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0127 14:18:09.014865 1860210 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0127 14:18:09.020651 1860210 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0127 14:18:09.020750 1860210 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0127 14:18:09.151753 1860210 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0127 14:18:09.151884 1860210 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0127 14:18:09.653270 1860210 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.611822ms
I0127 14:18:09.653382 1860210 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0127 14:18:17.655587 1860210 kubeadm.go:310] [api-check] The API server is healthy after 8.002072671s
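Both gates above follow the same shape: poll a healthz URL until it returns 200 or a deadline expires. A minimal sketch of that pattern; the kubelet endpoint is plain HTTP on 127.0.0.1:10248, while the API server check in the log is HTTPS with cluster certificates, which this sketch does not handle:

// healthz_sketch.go - poll an HTTP healthz endpoint until it reports healthy,
// bounded by a deadline, like the kubelet-check/api-check phases above.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func waitHealthz(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	client := &http.Client{Timeout: 2 * time.Second}
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting %s for %s", timeout, url)
}

func main() {
	if err := waitHealthz("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}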
I0127 14:18:17.668708 1860210 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0127 14:18:17.682413 1860210 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0127 14:18:17.704713 1860210 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0127 14:18:17.704968 1860210 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-635679 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0127 14:18:17.713080 1860210 kubeadm.go:310] [bootstrap-token] Using token: hphos4.59px2lq9c4g168m4
I0127 14:18:17.714344 1860210 out.go:235] - Configuring RBAC rules ...
I0127 14:18:17.714512 1860210 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0127 14:18:17.721371 1860210 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0127 14:18:17.727820 1860210 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0127 14:18:17.731000 1860210 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0127 14:18:17.733786 1860210 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0127 14:18:17.736631 1860210 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0127 14:18:18.062788 1860210 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0127 14:18:18.485209 1860210 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0127 14:18:19.063817 1860210 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0127 14:18:19.065385 1860210 kubeadm.go:310]
I0127 14:18:19.065503 1860210 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0127 14:18:19.065516 1860210 kubeadm.go:310]
I0127 14:18:19.065665 1860210 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0127 14:18:19.065689 1860210 kubeadm.go:310]
I0127 14:18:19.065721 1860210 kubeadm.go:310] mkdir -p $HOME/.kube
I0127 14:18:19.065806 1860210 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0127 14:18:19.065900 1860210 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0127 14:18:19.065916 1860210 kubeadm.go:310]
I0127 14:18:19.065998 1860210 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0127 14:18:19.066007 1860210 kubeadm.go:310]
I0127 14:18:19.066075 1860210 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0127 14:18:19.066089 1860210 kubeadm.go:310]
I0127 14:18:19.066154 1860210 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0127 14:18:19.066260 1860210 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0127 14:18:19.066381 1860210 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0127 14:18:19.066401 1860210 kubeadm.go:310]
I0127 14:18:19.066518 1860210 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0127 14:18:19.066627 1860210 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0127 14:18:19.066638 1860210 kubeadm.go:310]
I0127 14:18:19.066782 1860210 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hphos4.59px2lq9c4g168m4 \
I0127 14:18:19.066929 1860210 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:da793a243b54c5383b132bcbdadb0739d427211c6d5d2593cf9375377ad7834e \
I0127 14:18:19.066973 1860210 kubeadm.go:310] --control-plane
I0127 14:18:19.066984 1860210 kubeadm.go:310]
I0127 14:18:19.067112 1860210 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0127 14:18:19.067124 1860210 kubeadm.go:310]
I0127 14:18:19.067244 1860210 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hphos4.59px2lq9c4g168m4 \
I0127 14:18:19.067390 1860210 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:da793a243b54c5383b132bcbdadb0739d427211c6d5d2593cf9375377ad7834e
I0127 14:18:19.067997 1860210 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0127 14:18:19.068048 1860210 cni.go:84] Creating CNI manager for ""
I0127 14:18:19.068068 1860210 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0127 14:18:19.070005 1860210 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0127 14:18:19.071444 1860210 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0127 14:18:19.083641 1860210 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
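The 496-byte conflist copied above is not shown in the log. For illustration only, an assumed minimal bridge-plus-host-local configuration of the kind the bridge CNI plugin accepts, written to the same path; this is not minikube's exact file, and the subnet is a placeholder:

// cni_conflist_sketch.go - write an assumed minimal bridge CNI conflist to
// /etc/cni/net.d/1-k8s.conflist (requires root, like the sudo mkdir/scp pair above).
package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}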
I0127 14:18:19.106274 1860210 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0127 14:18:19.106345 1860210 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 14:18:19.106355 1860210 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-635679 minikube.k8s.io/updated_at=2025_01_27T14_18_19_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=a23717f006184090cd3c7894641a342ba4ae8c4d minikube.k8s.io/name=embed-certs-635679 minikube.k8s.io/primary=true
I0127 14:18:19.138908 1860210 ops.go:34] apiserver oom_adj: -16
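The -16 reported above is read from the apiserver's /proc entry, as the "cat /proc/$(pgrep kube-apiserver)/oom_adj" command shows. A sketch of the same check, assuming pgrep is available on the node:

// oom_adj_sketch.go - find the kube-apiserver PID and read its OOM score
// adjustment from /proc (the log above reports -16).
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("kube-apiserver not running:", err)
		return
	}
	pid := strings.TrimSpace(string(out))
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("apiserver oom_adj: %s", adj)
}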
I0127 14:18:19.335635 1860210 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 14:18:19.836673 1860210 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 14:18:20.336363 1860210 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 14:18:20.836633 1860210 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 14:18:21.336621 1860210 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 14:18:21.835710 1860210 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 14:18:22.336249 1860210 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 14:18:22.835985 1860210 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 14:18:23.335802 1860210 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 14:18:23.461630 1860210 kubeadm.go:1113] duration metric: took 4.355337127s to wait for elevateKubeSystemPrivileges
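The burst of "kubectl get sa default" calls above is a retry loop: immediately after kubeadm init, the controller manager has not yet created the "default" ServiceAccount, so the check repeats about twice a second until it succeeds. A bounded sketch of that loop:

// default_sa_sketch.go - retry until the "default" ServiceAccount exists,
// matching the ~0.5s spacing of the checks in the log above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for i := 0; i < 20; i++ {
		if err := exec.Command("kubectl", "get", "sa", "default").Run(); err == nil {
			fmt.Println("default service account exists")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for the default service account")
}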
I0127 14:18:23.461686 1860210 kubeadm.go:394] duration metric: took 4m40.616696193s to StartCluster
I0127 14:18:23.461716 1860210 settings.go:142] acquiring lock: {Name:mk26fe6d7b14cf85ba842a23d71a5c576b147570 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 14:18:23.461811 1860210 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20327-1798877/kubeconfig
I0127 14:18:23.463618 1860210 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-1798877/kubeconfig: {Name:mk83da0b53bf0d0962bc51b16c589da37a41b6e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 14:18:23.464255 1860210 config.go:182] Loaded profile config "embed-certs-635679": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 14:18:23.464387 1860210 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0127 14:18:23.464492 1860210 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-635679"
I0127 14:18:23.464512 1860210 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-635679"
W0127 14:18:23.464525 1860210 addons.go:247] addon storage-provisioner should already be in state true
I0127 14:18:23.464561 1860210 host.go:66] Checking if "embed-certs-635679" exists ...
I0127 14:18:23.464992 1860210 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
I0127 14:18:23.465036 1860210 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:18:23.465118 1860210 addons.go:69] Setting default-storageclass=true in profile "embed-certs-635679"
I0127 14:18:23.465161 1860210 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-635679"
I0127 14:18:23.465260 1860210 addons.go:69] Setting dashboard=true in profile "embed-certs-635679"
I0127 14:18:23.465281 1860210 addons.go:238] Setting addon dashboard=true in "embed-certs-635679"
W0127 14:18:23.465290 1860210 addons.go:247] addon dashboard should already be in state true
I0127 14:18:23.465318 1860210 host.go:66] Checking if "embed-certs-635679" exists ...
I0127 14:18:23.465505 1860210 addons.go:69] Setting metrics-server=true in profile "embed-certs-635679"
I0127 14:18:23.465529 1860210 addons.go:238] Setting addon metrics-server=true in "embed-certs-635679"
W0127 14:18:23.465537 1860210 addons.go:247] addon metrics-server should already be in state true
I0127 14:18:23.465577 1860210 host.go:66] Checking if "embed-certs-635679" exists ...
I0127 14:18:23.465620 1860210 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
I0127 14:18:23.465655 1860210 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:18:23.465703 1860210 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
I0127 14:18:23.465737 1860210 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:18:23.468726 1860210 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
I0127 14:18:23.468782 1860210 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:18:23.464353 1860210 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.180 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0127 14:18:23.472272 1860210 out.go:177] * Verifying Kubernetes components...
I0127 14:18:23.473717 1860210 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 14:18:23.486905 1860210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39459
I0127 14:18:23.487573 1860210 main.go:141] libmachine: () Calling .GetVersion
I0127 14:18:23.488533 1860210 main.go:141] libmachine: Using API Version 1
I0127 14:18:23.488564 1860210 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:18:23.489646 1860210 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:18:23.492416 1860210 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
I0127 14:18:23.492479 1860210 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:18:23.494948 1860210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44587
I0127 14:18:23.498090 1860210 main.go:141] libmachine: () Calling .GetVersion
I0127 14:18:23.499354 1860210 main.go:141] libmachine: Using API Version 1
I0127 14:18:23.499372 1860210 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:18:23.499777 1860210 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:18:23.501693 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetState
I0127 14:18:23.507108 1860210 addons.go:238] Setting addon default-storageclass=true in "embed-certs-635679"
W0127 14:18:23.507133 1860210 addons.go:247] addon default-storageclass should already be in state true
I0127 14:18:23.507169 1860210 host.go:66] Checking if "embed-certs-635679" exists ...
I0127 14:18:23.507561 1860210 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
I0127 14:18:23.507596 1860210 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:18:23.507842 1860210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37231
I0127 14:18:23.508334 1860210 main.go:141] libmachine: () Calling .GetVersion
I0127 14:18:23.508998 1860210 main.go:141] libmachine: Using API Version 1
I0127 14:18:23.509028 1860210 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:18:23.509419 1860210 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:18:23.509702 1860210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34095
I0127 14:18:23.510237 1860210 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
I0127 14:18:23.510277 1860210 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:18:23.510654 1860210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36573
I0127 14:18:23.510992 1860210 main.go:141] libmachine: () Calling .GetVersion
I0127 14:18:23.511486 1860210 main.go:141] libmachine: () Calling .GetVersion
I0127 14:18:23.511540 1860210 main.go:141] libmachine: Using API Version 1
I0127 14:18:23.511559 1860210 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:18:23.511969 1860210 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:18:23.512065 1860210 main.go:141] libmachine: Using API Version 1
I0127 14:18:23.512083 1860210 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:18:23.512416 1860210 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:18:23.512492 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetState
I0127 14:18:23.513009 1860210 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
I0127 14:18:23.513061 1860210 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:18:23.515440 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .DriverName
I0127 14:18:23.517429 1860210 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0127 14:18:23.518694 1860210 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0127 14:18:23.518719 1860210 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0127 14:18:23.518762 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHHostname
I0127 14:18:23.522925 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:18:23.523432 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:cf:47", ip: ""} in network mk-embed-certs-635679: {Iface:virbr3 ExpiryTime:2025-01-27 15:13:32 +0000 UTC Type:0 Mac:52:54:00:84:cf:47 Iaid: IPaddr:192.168.61.180 Prefix:24 Hostname:embed-certs-635679 Clientid:01:52:54:00:84:cf:47}
I0127 14:18:23.523476 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined IP address 192.168.61.180 and MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:18:23.523800 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHPort
I0127 14:18:23.524027 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHKeyPath
I0127 14:18:23.524224 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHUsername
I0127 14:18:23.524363 1860210 sshutil.go:53] new ssh client: &{IP:192.168.61.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/embed-certs-635679/id_rsa Username:docker}
I0127 14:18:23.527536 1860210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45997
I0127 14:18:23.528108 1860210 main.go:141] libmachine: () Calling .GetVersion
I0127 14:18:23.528643 1860210 main.go:141] libmachine: Using API Version 1
I0127 14:18:23.528663 1860210 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:18:23.529143 1860210 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:18:23.529762 1860210 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
I0127 14:18:23.529807 1860210 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:18:23.538135 1860210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35569
I0127 14:18:23.538598 1860210 main.go:141] libmachine: () Calling .GetVersion
I0127 14:18:23.539117 1860210 main.go:141] libmachine: Using API Version 1
I0127 14:18:23.539136 1860210 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:18:23.539501 1860210 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:18:23.539694 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetState
I0127 14:18:23.541494 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .DriverName
I0127 14:18:23.543428 1860210 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0127 14:18:23.544588 1860210 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0127 14:18:23.544972 1860210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36059
I0127 14:18:23.545476 1860210 main.go:141] libmachine: () Calling .GetVersion
I0127 14:18:23.545705 1860210 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0127 14:18:23.545726 1860210 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0127 14:18:23.545748 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHHostname
I0127 14:18:23.546012 1860210 main.go:141] libmachine: Using API Version 1
I0127 14:18:23.546035 1860210 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:18:23.546451 1860210 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:18:23.546625 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetState
I0127 14:18:23.548509 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .DriverName
I0127 14:18:23.549640 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:18:23.550013 1860210 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0127 14:18:23.550215 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:cf:47", ip: ""} in network mk-embed-certs-635679: {Iface:virbr3 ExpiryTime:2025-01-27 15:13:32 +0000 UTC Type:0 Mac:52:54:00:84:cf:47 Iaid: IPaddr:192.168.61.180 Prefix:24 Hostname:embed-certs-635679 Clientid:01:52:54:00:84:cf:47}
I0127 14:18:23.550237 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined IP address 192.168.61.180 and MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:18:23.550507 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHPort
I0127 14:18:23.550727 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHKeyPath
I0127 14:18:23.550982 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHUsername
I0127 14:18:23.551131 1860210 sshutil.go:53] new ssh client: &{IP:192.168.61.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/embed-certs-635679/id_rsa Username:docker}
I0127 14:18:23.551683 1860210 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0127 14:18:23.551699 1860210 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0127 14:18:23.551714 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHHostname
I0127 14:18:23.554782 1860210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37165
I0127 14:18:23.555098 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:18:23.555841 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHPort
I0127 14:18:23.555993 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:cf:47", ip: ""} in network mk-embed-certs-635679: {Iface:virbr3 ExpiryTime:2025-01-27 15:13:32 +0000 UTC Type:0 Mac:52:54:00:84:cf:47 Iaid: IPaddr:192.168.61.180 Prefix:24 Hostname:embed-certs-635679 Clientid:01:52:54:00:84:cf:47}
I0127 14:18:23.555996 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHKeyPath
I0127 14:18:23.556008 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined IP address 192.168.61.180 and MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:18:23.556074 1860210 main.go:141] libmachine: () Calling .GetVersion
I0127 14:18:23.556171 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHUsername
I0127 14:18:23.556323 1860210 sshutil.go:53] new ssh client: &{IP:192.168.61.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/embed-certs-635679/id_rsa Username:docker}
I0127 14:18:23.556582 1860210 main.go:141] libmachine: Using API Version 1
I0127 14:18:23.556602 1860210 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:18:23.557022 1860210 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:18:23.557197 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetState
I0127 14:18:23.558779 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .DriverName
I0127 14:18:23.559006 1860210 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0127 14:18:23.559020 1860210 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0127 14:18:23.559039 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHHostname
I0127 14:18:23.562487 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:18:23.562891 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:cf:47", ip: ""} in network mk-embed-certs-635679: {Iface:virbr3 ExpiryTime:2025-01-27 15:13:32 +0000 UTC Type:0 Mac:52:54:00:84:cf:47 Iaid: IPaddr:192.168.61.180 Prefix:24 Hostname:embed-certs-635679 Clientid:01:52:54:00:84:cf:47}
I0127 14:18:23.562925 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | domain embed-certs-635679 has defined IP address 192.168.61.180 and MAC address 52:54:00:84:cf:47 in network mk-embed-certs-635679
I0127 14:18:23.563172 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHPort
I0127 14:18:23.563357 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHKeyPath
I0127 14:18:23.563516 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .GetSSHUsername
I0127 14:18:23.563641 1860210 sshutil.go:53] new ssh client: &{IP:192.168.61.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/embed-certs-635679/id_rsa Username:docker}
I0127 14:18:23.757691 1860210 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0127 14:18:23.782030 1860210 node_ready.go:35] waiting up to 6m0s for node "embed-certs-635679" to be "Ready" ...
I0127 14:18:23.817711 1860210 node_ready.go:49] node "embed-certs-635679" has status "Ready":"True"
I0127 14:18:23.817741 1860210 node_ready.go:38] duration metric: took 35.669892ms for node "embed-certs-635679" to be "Ready" ...
I0127 14:18:23.817752 1860210 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods, including those with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
I0127 14:18:23.859312 1860210 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-52k8k" in "kube-system" namespace to be "Ready" ...
I0127 14:18:23.889570 1860210 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0127 14:18:23.894297 1860210 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0127 14:18:23.894322 1860210 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0127 14:18:23.961705 1860210 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0127 14:18:23.961741 1860210 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0127 14:18:23.980733 1860210 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0127 14:18:24.000036 1860210 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0127 14:18:24.000069 1860210 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0127 14:18:24.014883 1860210 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0127 14:18:24.014916 1860210 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0127 14:18:24.046102 1860210 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0127 14:18:24.046137 1860210 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0127 14:18:24.084833 1860210 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0127 14:18:24.084873 1860210 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0127 14:18:24.149628 1860210 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0127 14:18:24.149663 1860210 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0127 14:18:24.254695 1860210 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0127 14:18:24.289523 1860210 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0127 14:18:24.289558 1860210 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0127 14:18:24.398702 1860210 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0127 14:18:24.398835 1860210 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0127 14:18:24.441678 1860210 main.go:141] libmachine: Making call to close driver server
I0127 14:18:24.441738 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .Close
I0127 14:18:24.442877 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | Closing plugin on server side
I0127 14:18:24.442908 1860210 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:18:24.442961 1860210 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:18:24.442981 1860210 main.go:141] libmachine: Making call to close driver server
I0127 14:18:24.443016 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .Close
I0127 14:18:24.443437 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | Closing plugin on server side
I0127 14:18:24.443453 1860210 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:18:24.443509 1860210 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:18:24.467985 1860210 main.go:141] libmachine: Making call to close driver server
I0127 14:18:24.468017 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .Close
I0127 14:18:24.468404 1860210 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:18:24.468469 1860210 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:18:24.520080 1860210 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0127 14:18:24.520127 1860210 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0127 14:18:24.566543 1860210 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0127 14:18:24.566583 1860210 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0127 14:18:24.694053 1860210 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0127 14:18:24.694088 1860210 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0127 14:18:24.797378 1860210 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0127 14:18:25.171642 1860210 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.19083972s)
I0127 14:18:25.171700 1860210 main.go:141] libmachine: Making call to close driver server
I0127 14:18:25.171712 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .Close
I0127 14:18:25.172020 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | Closing plugin on server side
I0127 14:18:25.173376 1860210 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:18:25.173397 1860210 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:18:25.173415 1860210 main.go:141] libmachine: Making call to close driver server
I0127 14:18:25.173425 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .Close
I0127 14:18:25.173721 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | Closing plugin on server side
I0127 14:18:25.173726 1860210 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:18:25.173783 1860210 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:18:25.469119 1860210 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.214292891s)
I0127 14:18:25.469195 1860210 main.go:141] libmachine: Making call to close driver server
I0127 14:18:25.469216 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .Close
I0127 14:18:25.469532 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | Closing plugin on server side
I0127 14:18:25.469545 1860210 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:18:25.469562 1860210 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:18:25.469573 1860210 main.go:141] libmachine: Making call to close driver server
I0127 14:18:25.469581 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .Close
I0127 14:18:25.469925 1860210 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:18:25.469946 1860210 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:18:25.469960 1860210 addons.go:479] Verifying addon metrics-server=true in "embed-certs-635679"
I0127 14:18:25.866472 1860210 pod_ready.go:103] pod "coredns-668d6bf9bc-52k8k" in "kube-system" namespace has status "Ready":"False"
I0127 14:18:26.857958 1860210 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.06051743s)
I0127 14:18:26.858077 1860210 main.go:141] libmachine: Making call to close driver server
I0127 14:18:26.858099 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .Close
I0127 14:18:26.858508 1860210 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:18:26.858535 1860210 main.go:141] libmachine: (embed-certs-635679) DBG | Closing plugin on server side
I0127 14:18:26.858543 1860210 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:18:26.858557 1860210 main.go:141] libmachine: Making call to close driver server
I0127 14:18:26.858564 1860210 main.go:141] libmachine: (embed-certs-635679) Calling .Close
I0127 14:18:26.859006 1860210 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:18:26.859020 1860210 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:18:26.860592 1860210 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p embed-certs-635679 addons enable metrics-server
I0127 14:18:26.861892 1860210 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
I0127 14:18:26.863115 1860210 addons.go:514] duration metric: took 3.398732326s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
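The addon flow above is uniform: each manifest is copied under /etc/kubernetes/addons, then a whole group is applied in one kubectl invocation with repeated -f flags against the node-local kubeconfig. A sketch of the apply step (manifest list abbreviated to three files from the log):

// addons_apply_sketch.go - apply a group of addon manifests in a single
// kubectl call, pointing KUBECONFIG at the node-local config as above.
package main

import (
	"os"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/dashboard-ns.yaml",
	}
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}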
I0127 14:18:28.369038 1860210 pod_ready.go:93] pod "coredns-668d6bf9bc-52k8k" in "kube-system" namespace has status "Ready":"True"
I0127 14:18:28.369069 1860210 pod_ready.go:82] duration metric: took 4.509722512s for pod "coredns-668d6bf9bc-52k8k" in "kube-system" namespace to be "Ready" ...
I0127 14:18:28.369083 1860210 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-vn9c5" in "kube-system" namespace to be "Ready" ...
I0127 14:18:30.378207 1860210 pod_ready.go:103] pod "coredns-668d6bf9bc-vn9c5" in "kube-system" namespace has status "Ready":"False"
I0127 14:18:32.845308 1860210 pod_ready.go:103] pod "coredns-668d6bf9bc-vn9c5" in "kube-system" namespace has status "Ready":"False"
I0127 14:18:34.383070 1860210 pod_ready.go:93] pod "coredns-668d6bf9bc-vn9c5" in "kube-system" namespace has status "Ready":"True"
I0127 14:18:34.383099 1860210 pod_ready.go:82] duration metric: took 6.014008774s for pod "coredns-668d6bf9bc-vn9c5" in "kube-system" namespace to be "Ready" ...
I0127 14:18:34.383110 1860210 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-635679" in "kube-system" namespace to be "Ready" ...
I0127 14:18:34.418534 1860210 pod_ready.go:93] pod "etcd-embed-certs-635679" in "kube-system" namespace has status "Ready":"True"
I0127 14:18:34.418566 1860210 pod_ready.go:82] duration metric: took 35.44003ms for pod "etcd-embed-certs-635679" in "kube-system" namespace to be "Ready" ...
I0127 14:18:34.418579 1860210 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-635679" in "kube-system" namespace to be "Ready" ...
I0127 14:18:34.444912 1860210 pod_ready.go:93] pod "kube-apiserver-embed-certs-635679" in "kube-system" namespace has status "Ready":"True"
I0127 14:18:34.444937 1860210 pod_ready.go:82] duration metric: took 26.350357ms for pod "kube-apiserver-embed-certs-635679" in "kube-system" namespace to be "Ready" ...
I0127 14:18:34.444948 1860210 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-635679" in "kube-system" namespace to be "Ready" ...
I0127 14:18:34.455394 1860210 pod_ready.go:93] pod "kube-controller-manager-embed-certs-635679" in "kube-system" namespace has status "Ready":"True"
I0127 14:18:34.455417 1860210 pod_ready.go:82] duration metric: took 10.46086ms for pod "kube-controller-manager-embed-certs-635679" in "kube-system" namespace to be "Ready" ...
I0127 14:18:34.455430 1860210 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-k2hsk" in "kube-system" namespace to be "Ready" ...
I0127 14:18:34.467062 1860210 pod_ready.go:93] pod "kube-proxy-k2hsk" in "kube-system" namespace has status "Ready":"True"
I0127 14:18:34.467097 1860210 pod_ready.go:82] duration metric: took 11.657705ms for pod "kube-proxy-k2hsk" in "kube-system" namespace to be "Ready" ...
I0127 14:18:34.467111 1860210 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-635679" in "kube-system" namespace to be "Ready" ...
I0127 14:18:34.774042 1860210 pod_ready.go:93] pod "kube-scheduler-embed-certs-635679" in "kube-system" namespace has status "Ready":"True"
I0127 14:18:34.774078 1860210 pod_ready.go:82] duration metric: took 306.957006ms for pod "kube-scheduler-embed-certs-635679" in "kube-system" namespace to be "Ready" ...
I0127 14:18:34.774099 1860210 pod_ready.go:39] duration metric: took 10.9563322s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
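The pod_ready.go lines above poll each pod's Ready condition until it flips to "True" or a 6m timeout expires. A minimal client-go sketch of the same wait (the helper name, the 2s poll interval, and the example pod are illustrative, not minikube's code):

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the pod until its Ready condition is True or the timeout expires.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil // status "Ready":"True", as in the log above
				}
			}
		}
		time.Sleep(2 * time.Second) // poll interval is a guess; minikube's differs
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitPodReady(context.Background(), cs, "kube-system", "etcd-embed-certs-635679", 6*time.Minute))
}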
I0127 14:18:34.774123 1860210 api_server.go:52] waiting for apiserver process to appear ...
I0127 14:18:34.774200 1860210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 14:18:34.791682 1860210 api_server.go:72] duration metric: took 11.322661462s to wait for apiserver process to appear ...
I0127 14:18:34.791712 1860210 api_server.go:88] waiting for apiserver healthz status ...
I0127 14:18:34.791737 1860210 api_server.go:253] Checking apiserver healthz at https://192.168.61.180:8443/healthz ...
I0127 14:18:34.796797 1860210 api_server.go:279] https://192.168.61.180:8443/healthz returned 200:
ok
I0127 14:18:34.798034 1860210 api_server.go:141] control plane version: v1.32.1
I0127 14:18:34.798065 1860210 api_server.go:131] duration metric: took 6.344197ms to wait for apiserver health ...
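The healthz probe above is a plain HTTPS GET against the apiserver, with a 200 "ok" body taken as healthy. A standalone sketch (the InsecureSkipVerify transport is a simplification for brevity; minikube actually trusts its generated CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.61.180:8443/healthz")
	if err != nil {
		fmt.Println("healthz:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}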
I0127 14:18:34.798075 1860210 system_pods.go:43] waiting for kube-system pods to appear ...
I0127 14:18:34.979734 1860210 system_pods.go:59] 9 kube-system pods found
I0127 14:18:34.979775 1860210 system_pods.go:61] "coredns-668d6bf9bc-52k8k" [b4744653-9cf8-4fda-a7d5-85bba4da019f] Running
I0127 14:18:34.979795 1860210 system_pods.go:61] "coredns-668d6bf9bc-vn9c5" [50b23903-1e83-4fbc-b1b9-101a646663c5] Running
I0127 14:18:34.979801 1860210 system_pods.go:61] "etcd-embed-certs-635679" [7d89dace-a11c-4983-b4ca-80b29d020f4b] Running
I0127 14:18:34.979806 1860210 system_pods.go:61] "kube-apiserver-embed-certs-635679" [66c0f79b-d0c6-4f3d-9694-02509dd94348] Running
I0127 14:18:34.979812 1860210 system_pods.go:61] "kube-controller-manager-embed-certs-635679" [63e7d07f-b74b-461a-9a1a-0a9adc3ecb40] Running
I0127 14:18:34.979817 1860210 system_pods.go:61] "kube-proxy-k2hsk" [a0d30935-bb79-44b5-b061-3b6fcc12ae42] Running
I0127 14:18:34.979821 1860210 system_pods.go:61] "kube-scheduler-embed-certs-635679" [ca49b72b-d7a3-4f81-9c1d-fa1cc176387c] Running
I0127 14:18:34.979830 1860210 system_pods.go:61] "metrics-server-f79f97bbb-7xqnn" [2fae80e8-5118-461e-b160-d384bf083f0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0127 14:18:34.979840 1860210 system_pods.go:61] "storage-provisioner" [0bdc72ce-c65f-4aca-b113-eff101fc04ad] Running
I0127 14:18:34.979851 1860210 system_pods.go:74] duration metric: took 181.768087ms to wait for pod list to return data ...
I0127 14:18:34.979870 1860210 default_sa.go:34] waiting for default service account to be created ...
I0127 14:18:35.174207 1860210 default_sa.go:45] found service account: "default"
I0127 14:18:35.174246 1860210 default_sa.go:55] duration metric: took 194.367344ms for default service account to be created ...
I0127 14:18:35.174261 1860210 system_pods.go:137] waiting for k8s-apps to be running ...
I0127 14:18:35.377677 1860210 system_pods.go:87] 9 kube-system pods found
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p embed-certs-635679 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.32.1": signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-635679 -n embed-certs-635679
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p embed-certs-635679 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-635679 logs -n 25: (1.191313853s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/SecondStart logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
| stop | -p old-k8s-version-908018 | old-k8s-version-908018 | jenkins | v1.35.0 | 27 Jan 25 14:12 UTC | 27 Jan 25 14:14 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p embed-certs-635679 | embed-certs-635679 | jenkins | v1.35.0 | 27 Jan 25 14:13 UTC | 27 Jan 25 14:13 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p embed-certs-635679 | embed-certs-635679 | jenkins | v1.35.0 | 27 Jan 25 14:13 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --embed-certs --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.1 | | | | | |
| addons | enable dashboard -p no-preload-591346 | no-preload-591346 | jenkins | v1.35.0 | 27 Jan 25 14:13 UTC | 27 Jan 25 14:13 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p no-preload-591346 | no-preload-591346 | jenkins | v1.35.0 | 27 Jan 25 14:13 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --preload=false --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.1 | | | | | |
| addons | enable dashboard -p default-k8s-diff-port-212529 | default-k8s-diff-port-212529 | jenkins | v1.35.0 | 27 Jan 25 14:14 UTC | 27 Jan 25 14:14 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p | default-k8s-diff-port-212529 | jenkins | v1.35.0 | 27 Jan 25 14:14 UTC | |
| | default-k8s-diff-port-212529 | | | | | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --apiserver-port=8444 | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.1 | | | | | |
| addons | enable dashboard -p old-k8s-version-908018 | old-k8s-version-908018 | jenkins | v1.35.0 | 27 Jan 25 14:14 UTC | 27 Jan 25 14:14 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p old-k8s-version-908018 | old-k8s-version-908018 | jenkins | v1.35.0 | 27 Jan 25 14:14 UTC | 27 Jan 25 14:17 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| image | old-k8s-version-908018 image | old-k8s-version-908018 | jenkins | v1.35.0 | 27 Jan 25 14:17 UTC | 27 Jan 25 14:17 UTC |
| | list --format=json | | | | | |
| pause | -p old-k8s-version-908018 | old-k8s-version-908018 | jenkins | v1.35.0 | 27 Jan 25 14:17 UTC | 27 Jan 25 14:17 UTC |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p old-k8s-version-908018 | old-k8s-version-908018 | jenkins | v1.35.0 | 27 Jan 25 14:17 UTC | 27 Jan 25 14:17 UTC |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p old-k8s-version-908018 | old-k8s-version-908018 | jenkins | v1.35.0 | 27 Jan 25 14:17 UTC | 27 Jan 25 14:17 UTC |
| delete | -p old-k8s-version-908018 | old-k8s-version-908018 | jenkins | v1.35.0 | 27 Jan 25 14:17 UTC | 27 Jan 25 14:17 UTC |
| start | -p newest-cni-309688 --memory=2200 --alsologtostderr | newest-cni-309688 | jenkins | v1.35.0 | 27 Jan 25 14:17 UTC | 27 Jan 25 14:18 UTC |
| | --wait=apiserver,system_pods,default_sa | | | | | |
| | --network-plugin=cni | | | | | |
| | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 | | | | | |
| | --driver=kvm2 --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.1 | | | | | |
| addons | enable metrics-server -p newest-cni-309688 | newest-cni-309688 | jenkins | v1.35.0 | 27 Jan 25 14:18 UTC | 27 Jan 25 14:18 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p newest-cni-309688 | newest-cni-309688 | jenkins | v1.35.0 | 27 Jan 25 14:18 UTC | 27 Jan 25 14:18 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p newest-cni-309688 | newest-cni-309688 | jenkins | v1.35.0 | 27 Jan 25 14:18 UTC | 27 Jan 25 14:18 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p newest-cni-309688 --memory=2200 --alsologtostderr | newest-cni-309688 | jenkins | v1.35.0 | 27 Jan 25 14:18 UTC | 27 Jan 25 14:19 UTC |
| | --wait=apiserver,system_pods,default_sa | | | | | |
| | --network-plugin=cni | | | | | |
| | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 | | | | | |
| | --driver=kvm2 --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.1 | | | | | |
| image | newest-cni-309688 image list | newest-cni-309688 | jenkins | v1.35.0 | 27 Jan 25 14:19 UTC | 27 Jan 25 14:19 UTC |
| | --format=json | | | | | |
| pause | -p newest-cni-309688 | newest-cni-309688 | jenkins | v1.35.0 | 27 Jan 25 14:19 UTC | 27 Jan 25 14:19 UTC |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p newest-cni-309688 | newest-cni-309688 | jenkins | v1.35.0 | 27 Jan 25 14:19 UTC | 27 Jan 25 14:19 UTC |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p newest-cni-309688 | newest-cni-309688 | jenkins | v1.35.0 | 27 Jan 25 14:19 UTC | 27 Jan 25 14:19 UTC |
| delete | -p newest-cni-309688 | newest-cni-309688 | jenkins | v1.35.0 | 27 Jan 25 14:19 UTC | 27 Jan 25 14:19 UTC |
| delete | -p no-preload-591346 | no-preload-591346 | jenkins | v1.35.0 | 27 Jan 25 14:40 UTC | 27 Jan 25 14:40 UTC |
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2025/01/27 14:18:41
Running on machine: ubuntu-20-agent-6
Binary: Built with gc go1.23.4 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
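The header above describes glog/klog framing: a severity letter, mmdd date, microsecond timestamp, thread id, file:line, then the message. A small sketch of a parser for such lines (the regexp is illustrative, not the canonical grammar):

package main

import (
	"fmt"
	"regexp"
)

// severity, mmdd, hh:mm:ss.uuuuuu, threadid, file, line, msg
var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

func main() {
	m := klogLine.FindStringSubmatch("I0127 14:18:41.854015 1863329 out.go:345] Setting OutFile to fd 1 ...")
	if m != nil {
		fmt.Printf("severity=%s date=%s time=%s tid=%s file=%s line=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6], m[7])
	}
}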
I0127 14:18:41.854015 1863329 out.go:345] Setting OutFile to fd 1 ...
I0127 14:18:41.854179 1863329 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 14:18:41.854190 1863329 out.go:358] Setting ErrFile to fd 2...
I0127 14:18:41.854197 1863329 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 14:18:41.854387 1863329 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-1798877/.minikube/bin
I0127 14:18:41.855024 1863329 out.go:352] Setting JSON to false
I0127 14:18:41.856109 1863329 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":39663,"bootTime":1737947859,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0127 14:18:41.856224 1863329 start.go:139] virtualization: kvm guest
I0127 14:18:41.858116 1863329 out.go:177] * [newest-cni-309688] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
I0127 14:18:41.859411 1863329 notify.go:220] Checking for updates...
I0127 14:18:41.859457 1863329 out.go:177] - MINIKUBE_LOCATION=20327
I0127 14:18:41.860616 1863329 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0127 14:18:41.861927 1863329 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20327-1798877/kubeconfig
I0127 14:18:41.863092 1863329 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-1798877/.minikube
I0127 14:18:41.864171 1863329 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0127 14:18:41.865251 1863329 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0127 14:18:41.866889 1863329 config.go:182] Loaded profile config "newest-cni-309688": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 14:18:41.867384 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
I0127 14:18:41.867442 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:18:41.883915 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39313
I0127 14:18:41.884516 1863329 main.go:141] libmachine: () Calling .GetVersion
I0127 14:18:41.885154 1863329 main.go:141] libmachine: Using API Version 1
I0127 14:18:41.885177 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:18:41.885640 1863329 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:18:41.885855 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
I0127 14:18:41.886202 1863329 driver.go:394] Setting default libvirt URI to qemu:///system
I0127 14:18:41.886661 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
I0127 14:18:41.886728 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:18:41.904702 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45291
I0127 14:18:41.905242 1863329 main.go:141] libmachine: () Calling .GetVersion
I0127 14:18:41.905789 1863329 main.go:141] libmachine: Using API Version 1
I0127 14:18:41.905815 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:18:41.906241 1863329 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:18:41.906460 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
I0127 14:18:41.947119 1863329 out.go:177] * Using the kvm2 driver based on existing profile
I0127 14:18:41.948433 1863329 start.go:297] selected driver: kvm2
I0127 14:18:41.948449 1863329 start.go:901] validating driver "kvm2" against &{Name:newest-cni-309688 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-309688 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.17 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0127 14:18:41.948615 1863329 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0127 14:18:41.949339 1863329 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 14:18:41.949417 1863329 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20327-1798877/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0127 14:18:41.966476 1863329 install.go:137] /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2 version is 1.35.0
I0127 14:18:41.966978 1863329 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
I0127 14:18:41.967016 1863329 cni.go:84] Creating CNI manager for ""
I0127 14:18:41.967062 1863329 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0127 14:18:41.967095 1863329 start.go:340] cluster config:
{Name:newest-cni-309688 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-309688 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.17 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0127 14:18:41.967211 1863329 iso.go:125] acquiring lock: {Name:mk3326e4e64b9d95edc1453384276c21a2957c66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 14:18:41.969136 1863329 out.go:177] * Starting "newest-cni-309688" primary control-plane node in "newest-cni-309688" cluster
I0127 14:18:41.970047 1863329 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
I0127 14:18:41.970083 1863329 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20327-1798877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
I0127 14:18:41.970090 1863329 cache.go:56] Caching tarball of preloaded images
I0127 14:18:41.970203 1863329 preload.go:172] Found /home/jenkins/minikube-integration/20327-1798877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0127 14:18:41.970215 1863329 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on containerd
I0127 14:18:41.970348 1863329 profile.go:143] Saving config to /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/newest-cni-309688/config.json ...
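The profile config saved here is plain JSON. A trimmed-down struct can read the few fields this run cares about; the field names below mirror the struct dump above, but the exact on-disk schema is an assumption, not a documented minikube API:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type kubernetesConfig struct {
	KubernetesVersion string
	ClusterName       string
	ContainerRuntime  string
}

type clusterConfig struct {
	Name             string
	Memory           int
	CPUs             int
	Driver           string
	KubernetesConfig kubernetesConfig
}

func main() {
	// MINIKUBE_HOME points at the .minikube dir, as in the env listing above
	raw, err := os.ReadFile(os.ExpandEnv("$MINIKUBE_HOME/profiles/newest-cni-309688/config.json"))
	if err != nil {
		fmt.Println(err)
		return
	}
	var cc clusterConfig
	if err := json.Unmarshal(raw, &cc); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("%s: %s on %s, %dMB / %d CPUs\n",
		cc.Name, cc.KubernetesConfig.KubernetesVersion, cc.Driver, cc.Memory, cc.CPUs)
}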
I0127 14:18:41.970570 1863329 start.go:360] acquireMachinesLock for newest-cni-309688: {Name:mk6fcac41a7a21b211b65e56994e625852d1a781 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0127 14:18:41.970626 1863329 start.go:364] duration metric: took 32.288µs to acquireMachinesLock for "newest-cni-309688"
I0127 14:18:41.970646 1863329 start.go:96] Skipping create...Using existing machine configuration
I0127 14:18:41.970657 1863329 fix.go:54] fixHost starting:
I0127 14:18:41.971072 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
I0127 14:18:41.971127 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:18:41.987333 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41593
I0127 14:18:41.987957 1863329 main.go:141] libmachine: () Calling .GetVersion
I0127 14:18:41.988457 1863329 main.go:141] libmachine: Using API Version 1
I0127 14:18:41.988482 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:18:41.988963 1863329 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:18:41.989252 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
I0127 14:18:41.989407 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetState
I0127 14:18:41.991188 1863329 fix.go:112] recreateIfNeeded on newest-cni-309688: state=Stopped err=<nil>
I0127 14:18:41.991220 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
W0127 14:18:41.991396 1863329 fix.go:138] unexpected machine state, will restart: <nil>
I0127 14:18:41.993400 1863329 out.go:177] * Restarting existing kvm2 VM for "newest-cni-309688" ...
I0127 14:18:39.739774 1860441 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0127 14:18:39.739799 1860441 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0127 14:18:39.776579 1860441 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0127 14:18:39.776612 1860441 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0127 14:18:39.821641 1860441 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0127 14:18:39.821669 1860441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0127 14:18:39.837528 1860441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0127 14:18:39.899562 1860441 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0127 14:18:39.899592 1860441 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0127 14:18:39.941841 1860441 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0127 14:18:39.941883 1860441 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0127 14:18:39.958020 1860441 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0127 14:18:39.958049 1860441 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0127 14:18:39.985706 1860441 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0127 14:18:39.985733 1860441 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0127 14:18:40.018166 1860441 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0127 14:18:40.018198 1860441 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0127 14:18:40.049338 1860441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
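The addon install above stages each manifest into the VM (ssh_runner's scp) and then applies the whole batch with the VM's own kubectl under sudo. A rough sketch of that pattern (host, key path, and the /tmp staging dir are placeholders; minikube copies into /etc/kubernetes/addons instead):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	key := "/path/to/machines/no-preload-591346/id_rsa" // placeholder key path
	host := "docker@192.168.39.238"
	// stage the manifest inside the VM
	scp := exec.Command("scp", "-i", key, "-o", "StrictHostKeyChecking=no",
		"dashboard-svc.yaml", host+":/tmp/dashboard-svc.yaml")
	if out, err := scp.CombinedOutput(); err != nil {
		fmt.Printf("scp: %v\n%s", err, out)
		return
	}
	// apply it with the kubelet-local kubectl, as the log's command line does
	apply := exec.Command("ssh", "-i", key, "-o", "StrictHostKeyChecking=no", host,
		"sudo KUBECONFIG=/var/lib/minikube/kubeconfig "+
			"/var/lib/minikube/binaries/v1.32.1/kubectl apply -f /tmp/dashboard-svc.yaml")
	if out, err := apply.CombinedOutput(); err != nil {
		fmt.Printf("kubectl apply: %v\n%s", err, out)
		return
	}
	fmt.Println("applied")
}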
I0127 14:18:40.335449 1860441 main.go:141] libmachine: Making call to close driver server
I0127 14:18:40.335486 1860441 main.go:141] libmachine: (no-preload-591346) Calling .Close
I0127 14:18:40.335522 1860441 main.go:141] libmachine: Making call to close driver server
I0127 14:18:40.335544 1860441 main.go:141] libmachine: (no-preload-591346) Calling .Close
I0127 14:18:40.335886 1860441 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:18:40.335906 1860441 main.go:141] libmachine: (no-preload-591346) DBG | Closing plugin on server side
I0127 14:18:40.335921 1860441 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:18:40.335932 1860441 main.go:141] libmachine: Making call to close driver server
I0127 14:18:40.335939 1860441 main.go:141] libmachine: (no-preload-591346) Calling .Close
I0127 14:18:40.335940 1860441 main.go:141] libmachine: (no-preload-591346) DBG | Closing plugin on server side
I0127 14:18:40.336011 1860441 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:18:40.336058 1860441 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:18:40.336071 1860441 main.go:141] libmachine: Making call to close driver server
I0127 14:18:40.336079 1860441 main.go:141] libmachine: (no-preload-591346) Calling .Close
I0127 14:18:40.336199 1860441 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:18:40.336202 1860441 main.go:141] libmachine: (no-preload-591346) DBG | Closing plugin on server side
I0127 14:18:40.336210 1860441 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:18:40.336321 1860441 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:18:40.336339 1860441 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:18:40.361215 1860441 main.go:141] libmachine: Making call to close driver server
I0127 14:18:40.361236 1860441 main.go:141] libmachine: (no-preload-591346) Calling .Close
I0127 14:18:40.361528 1860441 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:18:40.361572 1860441 main.go:141] libmachine: (no-preload-591346) DBG | Closing plugin on server side
I0127 14:18:40.361588 1860441 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:18:40.976702 1860441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.139130092s)
I0127 14:18:40.976753 1860441 main.go:141] libmachine: Making call to close driver server
I0127 14:18:40.976768 1860441 main.go:141] libmachine: (no-preload-591346) Calling .Close
I0127 14:18:40.977190 1860441 main.go:141] libmachine: (no-preload-591346) DBG | Closing plugin on server side
I0127 14:18:40.977233 1860441 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:18:40.977244 1860441 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:18:40.977254 1860441 main.go:141] libmachine: Making call to close driver server
I0127 14:18:40.977278 1860441 main.go:141] libmachine: (no-preload-591346) Calling .Close
I0127 14:18:40.977544 1860441 main.go:141] libmachine: (no-preload-591346) DBG | Closing plugin on server side
I0127 14:18:40.977626 1860441 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:18:40.977659 1860441 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:18:40.977685 1860441 addons.go:479] Verifying addon metrics-server=true in "no-preload-591346"
I0127 14:18:41.537877 1860441 pod_ready.go:103] pod "etcd-no-preload-591346" in "kube-system" namespace has status "Ready":"False"
I0127 14:18:41.993401 1860441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.943993844s)
I0127 14:18:41.993457 1860441 main.go:141] libmachine: Making call to close driver server
I0127 14:18:41.993474 1860441 main.go:141] libmachine: (no-preload-591346) Calling .Close
I0127 14:18:41.993713 1860441 main.go:141] libmachine: (no-preload-591346) DBG | Closing plugin on server side
I0127 14:18:41.993737 1860441 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:18:41.993755 1860441 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:18:41.993778 1860441 main.go:141] libmachine: Making call to close driver server
I0127 14:18:41.993785 1860441 main.go:141] libmachine: (no-preload-591346) Calling .Close
I0127 14:18:41.994133 1860441 main.go:141] libmachine: (no-preload-591346) DBG | Closing plugin on server side
I0127 14:18:41.994158 1860441 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:18:41.994172 1860441 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:18:41.995251 1860441 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p no-preload-591346 addons enable metrics-server
I0127 14:18:41.996556 1860441 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
I0127 14:18:41.997692 1860441 addons.go:514] duration metric: took 2.74201161s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
I0127 14:18:43.539748 1860441 pod_ready.go:103] pod "etcd-no-preload-591346" in "kube-system" namespace has status "Ready":"False"
I0127 14:18:40.906503 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
I0127 14:18:42.906895 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
I0127 14:18:45.405827 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
I0127 14:18:41.996357 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .Start
I0127 14:18:41.996613 1863329 main.go:141] libmachine: (newest-cni-309688) starting domain...
I0127 14:18:41.996630 1863329 main.go:141] libmachine: (newest-cni-309688) ensuring networks are active...
I0127 14:18:41.997620 1863329 main.go:141] libmachine: (newest-cni-309688) Ensuring network default is active
I0127 14:18:41.998106 1863329 main.go:141] libmachine: (newest-cni-309688) Ensuring network mk-newest-cni-309688 is active
I0127 14:18:41.998535 1863329 main.go:141] libmachine: (newest-cni-309688) getting domain XML...
I0127 14:18:41.999349 1863329 main.go:141] libmachine: (newest-cni-309688) creating domain...
I0127 14:18:43.362085 1863329 main.go:141] libmachine: (newest-cni-309688) waiting for IP...
I0127 14:18:43.363264 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:18:43.363792 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
I0127 14:18:43.363901 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:43.363777 1863364 retry.go:31] will retry after 245.978549ms: waiting for domain to come up
I0127 14:18:43.611613 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:18:43.612280 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
I0127 14:18:43.612314 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:43.612267 1863364 retry.go:31] will retry after 277.473907ms: waiting for domain to come up
I0127 14:18:43.891925 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:18:43.892577 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
I0127 14:18:43.892608 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:43.892527 1863364 retry.go:31] will retry after 327.737062ms: waiting for domain to come up
I0127 14:18:44.221804 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:18:44.222337 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
I0127 14:18:44.222385 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:44.222298 1863364 retry.go:31] will retry after 472.286938ms: waiting for domain to come up
I0127 14:18:44.695804 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:18:44.696473 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
I0127 14:18:44.696498 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:44.696438 1863364 retry.go:31] will retry after 556.965256ms: waiting for domain to come up
I0127 14:18:45.254693 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:18:45.255242 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
I0127 14:18:45.255276 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:45.255189 1863364 retry.go:31] will retry after 809.038394ms: waiting for domain to come up
I0127 14:18:46.066036 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:18:46.066585 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
I0127 14:18:46.066616 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:46.066540 1863364 retry.go:31] will retry after 758.303359ms: waiting for domain to come up
I0127 14:18:46.826373 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:18:46.826997 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
I0127 14:18:46.827029 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:46.826933 1863364 retry.go:31] will retry after 1.102767077s: waiting for domain to come up
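The retry.go lines above show a roughly exponential, jittered backoff while the driver waits for the restarted domain to acquire an IP. A sketch of that cadence (the 250ms base, 2x growth, 5s cap, and jitter fraction are guesses at the shape, not minikube's constants):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil polls fn with growing, jittered waits until it succeeds or timeout elapses.
func retryUntil(timeout time.Duration, fn func() bool) error {
	deadline := time.Now().Add(timeout)
	wait := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if fn() {
			return nil
		}
		// add up to 50% jitter, then grow the base wait, capped at 5s
		sleep := wait + time.Duration(rand.Int63n(int64(wait/2)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
		time.Sleep(sleep)
		if wait *= 2; wait > 5*time.Second {
			wait = 5 * time.Second
		}
	}
	return errors.New("timed out waiting for domain IP")
}

func main() {
	_ = retryUntil(30*time.Second, func() bool { return false /* replace with an IP lookup */ })
}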
I0127 14:18:46.040082 1860441 pod_ready.go:103] pod "etcd-no-preload-591346" in "kube-system" namespace has status "Ready":"False"
I0127 14:18:47.537709 1860441 pod_ready.go:93] pod "etcd-no-preload-591346" in "kube-system" namespace has status "Ready":"True"
I0127 14:18:47.537735 1860441 pod_ready.go:82] duration metric: took 8.005981983s for pod "etcd-no-preload-591346" in "kube-system" namespace to be "Ready" ...
I0127 14:18:47.537745 1860441 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-591346" in "kube-system" namespace to be "Ready" ...
I0127 14:18:47.545174 1860441 pod_ready.go:93] pod "kube-apiserver-no-preload-591346" in "kube-system" namespace has status "Ready":"True"
I0127 14:18:47.545199 1860441 pod_ready.go:82] duration metric: took 7.447836ms for pod "kube-apiserver-no-preload-591346" in "kube-system" namespace to be "Ready" ...
I0127 14:18:47.545210 1860441 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-591346" in "kube-system" namespace to be "Ready" ...
I0127 14:18:47.564920 1860441 pod_ready.go:93] pod "kube-controller-manager-no-preload-591346" in "kube-system" namespace has status "Ready":"True"
I0127 14:18:47.564957 1860441 pod_ready.go:82] duration metric: took 19.735587ms for pod "kube-controller-manager-no-preload-591346" in "kube-system" namespace to be "Ready" ...
I0127 14:18:47.564973 1860441 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-k69dv" in "kube-system" namespace to be "Ready" ...
I0127 14:18:47.588782 1860441 pod_ready.go:93] pod "kube-proxy-k69dv" in "kube-system" namespace has status "Ready":"True"
I0127 14:18:47.588811 1860441 pod_ready.go:82] duration metric: took 23.829861ms for pod "kube-proxy-k69dv" in "kube-system" namespace to be "Ready" ...
I0127 14:18:47.588824 1860441 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-591346" in "kube-system" namespace to be "Ready" ...
I0127 14:18:47.598620 1860441 pod_ready.go:93] pod "kube-scheduler-no-preload-591346" in "kube-system" namespace has status "Ready":"True"
I0127 14:18:47.598656 1860441 pod_ready.go:82] duration metric: took 9.822306ms for pod "kube-scheduler-no-preload-591346" in "kube-system" namespace to be "Ready" ...
I0127 14:18:47.598668 1860441 pod_ready.go:39] duration metric: took 8.076081083s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0127 14:18:47.598693 1860441 api_server.go:52] waiting for apiserver process to appear ...
I0127 14:18:47.598793 1860441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 14:18:47.615862 1860441 api_server.go:72] duration metric: took 8.36019503s to wait for apiserver process to appear ...
I0127 14:18:47.615895 1860441 api_server.go:88] waiting for apiserver healthz status ...
I0127 14:18:47.615918 1860441 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
I0127 14:18:47.631872 1860441 api_server.go:279] https://192.168.39.238:8443/healthz returned 200:
ok
I0127 14:18:47.632742 1860441 api_server.go:141] control plane version: v1.32.1
I0127 14:18:47.632766 1860441 api_server.go:131] duration metric: took 16.863539ms to wait for apiserver health ...
I0127 14:18:47.632774 1860441 system_pods.go:43] waiting for kube-system pods to appear ...
I0127 14:18:47.739770 1860441 system_pods.go:59] 9 kube-system pods found
I0127 14:18:47.739814 1860441 system_pods.go:61] "coredns-668d6bf9bc-cm66w" [97ffe415-a70c-44a4-aa07-5b99576c749d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0127 14:18:47.739824 1860441 system_pods.go:61] "coredns-668d6bf9bc-lq9hg" [688b4191-8c28-440b-bc93-d52964fe105c] Running
I0127 14:18:47.739833 1860441 system_pods.go:61] "etcd-no-preload-591346" [01ae260c-cbf6-4f04-be4e-565f3f408c45] Running
I0127 14:18:47.739838 1860441 system_pods.go:61] "kube-apiserver-no-preload-591346" [1433350f-5302-42e1-8763-0f8bbde34676] Running
I0127 14:18:47.739842 1860441 system_pods.go:61] "kube-controller-manager-no-preload-591346" [49eab0a5-09c9-4a2d-9913-1b45c145b52a] Running
I0127 14:18:47.739846 1860441 system_pods.go:61] "kube-proxy-k69dv" [393d6681-7d87-479a-94d3-5ff6cbfe16ed] Running
I0127 14:18:47.739849 1860441 system_pods.go:61] "kube-scheduler-no-preload-591346" [9f5af2ad-71a3-4481-a18a-8477f843553a] Running
I0127 14:18:47.739855 1860441 system_pods.go:61] "metrics-server-f79f97bbb-fqckz" [30644e2b-7988-4b55-aa94-fe774b820ed4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0127 14:18:47.739859 1860441 system_pods.go:61] "storage-provisioner" [f10d2d4c-7f96-4ff6-b6ae-71b7918fd3ee] Running
I0127 14:18:47.739866 1860441 system_pods.go:74] duration metric: took 107.08564ms to wait for pod list to return data ...
I0127 14:18:47.739874 1860441 default_sa.go:34] waiting for default service account to be created ...
I0127 14:18:47.936494 1860441 default_sa.go:45] found service account: "default"
I0127 14:18:47.936524 1860441 default_sa.go:55] duration metric: took 196.641742ms for default service account to be created ...
I0127 14:18:47.936536 1860441 system_pods.go:137] waiting for k8s-apps to be running ...
I0127 14:18:48.139726 1860441 system_pods.go:87] 9 kube-system pods found
I0127 14:18:47.405959 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
I0127 14:18:49.408149 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
I0127 14:18:47.931337 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:18:47.931793 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
I0127 14:18:47.931838 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:47.931776 1863364 retry.go:31] will retry after 1.120510293s: waiting for domain to come up
I0127 14:18:49.053548 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:18:49.054204 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
I0127 14:18:49.054231 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:49.054156 1863364 retry.go:31] will retry after 1.733549309s: waiting for domain to come up
I0127 14:18:50.790083 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:18:50.790567 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
I0127 14:18:50.790650 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:50.790566 1863364 retry.go:31] will retry after 1.990202359s: waiting for domain to come up
I0127 14:18:51.906048 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
I0127 14:18:53.906496 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
I0127 14:18:52.782229 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:18:52.782850 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
I0127 14:18:52.782892 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:52.782738 1863364 retry.go:31] will retry after 2.327681841s: waiting for domain to come up
I0127 14:18:55.113291 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:18:55.113832 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
I0127 14:18:55.113864 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:55.113778 1863364 retry.go:31] will retry after 3.526138042s: waiting for domain to come up
I0127 14:18:55.906587 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
I0127 14:18:58.405047 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
I0127 14:18:58.641406 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:18:58.642022 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | unable to find current IP address of domain newest-cni-309688 in network mk-newest-cni-309688
I0127 14:18:58.642056 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | I0127 14:18:58.641994 1863364 retry.go:31] will retry after 5.217691775s: waiting for domain to come up
I0127 14:19:00.906487 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
I0127 14:19:03.405134 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
I0127 14:19:05.405708 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
I0127 14:19:03.862320 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:19:03.862779 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has current primary IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:19:03.862804 1863329 main.go:141] libmachine: (newest-cni-309688) found domain IP: 192.168.72.17
I0127 14:19:03.862815 1863329 main.go:141] libmachine: (newest-cni-309688) reserving static IP address...
I0127 14:19:03.863295 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "newest-cni-309688", mac: "52:54:00:1b:25:ab", ip: "192.168.72.17"} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
I0127 14:19:03.863323 1863329 main.go:141] libmachine: (newest-cni-309688) reserved static IP address 192.168.72.17 for domain newest-cni-309688
I0127 14:19:03.863342 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | skip adding static IP to network mk-newest-cni-309688 - found existing host DHCP lease matching {name: "newest-cni-309688", mac: "52:54:00:1b:25:ab", ip: "192.168.72.17"}
I0127 14:19:03.863372 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | Getting to WaitForSSH function...
I0127 14:19:03.863389 1863329 main.go:141] libmachine: (newest-cni-309688) waiting for SSH...
I0127 14:19:03.865894 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:19:03.866214 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
I0127 14:19:03.866242 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:19:03.866399 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | Using SSH client type: external
I0127 14:19:03.866428 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | Using SSH private key: /home/jenkins/minikube-integration/20327-1798877/.minikube/machines/newest-cni-309688/id_rsa (-rw-------)
I0127 14:19:03.866460 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.17 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20327-1798877/.minikube/machines/newest-cni-309688/id_rsa -p 22] /usr/bin/ssh <nil>}
I0127 14:19:03.866485 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | About to run SSH command:
I0127 14:19:03.866510 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | exit 0
I0127 14:19:03.986391 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | SSH cmd err, output: <nil>:
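The WaitForSSH step above shells out to the external ssh client and considers the guest up once a bare `exit 0` returns success. A sketch of the same loop (key path and the 2s poll interval are placeholders):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH retries `ssh addr exit 0` until it exits cleanly or timeout elapses.
func waitForSSH(addr, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath, addr, "exit 0")
		if err := cmd.Run(); err == nil {
			return nil // sshd answered and ran the command
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh to %s not ready after %v", addr, timeout)
}

func main() {
	fmt.Println(waitForSSH("docker@192.168.72.17", "/path/to/id_rsa", time.Minute))
}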
I0127 14:19:03.986778 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetConfigRaw
I0127 14:19:03.987411 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetIP
I0127 14:19:03.990205 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:19:03.990686 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
I0127 14:19:03.990714 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:19:03.990989 1863329 profile.go:143] Saving config to /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/newest-cni-309688/config.json ...
I0127 14:19:03.991197 1863329 machine.go:93] provisionDockerMachine start ...
I0127 14:19:03.991218 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
I0127 14:19:03.991433 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
I0127 14:19:03.993663 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:19:03.993956 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
I0127 14:19:03.994002 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:19:03.994179 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
I0127 14:19:03.994359 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
I0127 14:19:03.994518 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
I0127 14:19:03.994653 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
I0127 14:19:03.994863 1863329 main.go:141] libmachine: Using SSH client type: native
I0127 14:19:03.995069 1863329 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.72.17 22 <nil> <nil>}
I0127 14:19:03.995080 1863329 main.go:141] libmachine: About to run SSH command:
hostname
I0127 14:19:04.094835 1863329 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0127 14:19:04.094866 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetMachineName
I0127 14:19:04.095102 1863329 buildroot.go:166] provisioning hostname "newest-cni-309688"
I0127 14:19:04.095129 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetMachineName
I0127 14:19:04.095318 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
I0127 14:19:04.097835 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:19:04.098248 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
I0127 14:19:04.098281 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:19:04.098404 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
I0127 14:19:04.098576 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
I0127 14:19:04.098735 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
I0127 14:19:04.098905 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
I0127 14:19:04.099088 1863329 main.go:141] libmachine: Using SSH client type: native
I0127 14:19:04.099267 1863329 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.72.17 22 <nil> <nil>}
I0127 14:19:04.099282 1863329 main.go:141] libmachine: About to run SSH command:
sudo hostname newest-cni-309688 && echo "newest-cni-309688" | sudo tee /etc/hostname
I0127 14:19:04.213036 1863329 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-309688
I0127 14:19:04.213082 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
I0127 14:19:04.215824 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:19:04.216184 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
I0127 14:19:04.216208 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:19:04.216357 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
I0127 14:19:04.216549 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
I0127 14:19:04.216671 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
I0127 14:19:04.216793 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
I0127 14:19:04.216979 1863329 main.go:141] libmachine: Using SSH client type: native
I0127 14:19:04.217204 1863329 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.72.17 22 <nil> <nil>}
I0127 14:19:04.217230 1863329 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\snewest-cni-309688' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-309688/g' /etc/hosts;
  else
    echo '127.0.1.1 newest-cni-309688' | sudo tee -a /etc/hosts;
  fi
fi
I0127 14:19:04.329169 1863329 main.go:141] libmachine: SSH cmd err, output: <nil>:
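[Editor's note: the provisioning above is a plain SSH round-trip: open a session to the VM, run the hostname script verbatim, read combined output. A minimal Go sketch of that round-trip, assuming golang.org/x/crypto/ssh; the key path is a placeholder and runRemote is an illustrative helper, not minikube's API.]

// ssh_sketch.go - sketch of the SSH round-trip the provisioner logs above.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func runRemote(addr string, cfg *ssh.ClientConfig, cmd string) (string, error) {
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	key, err := os.ReadFile("id_rsa") // placeholder for the machine key shown in the log
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for throwaway test VMs
	}
	// Same step as the log: write the hostname, then /etc/hostname.
	out, err := runRemote("192.168.72.17:22", cfg,
		`sudo hostname newest-cni-309688 && echo "newest-cni-309688" | sudo tee /etc/hostname`)
	fmt.Println(out, err)
}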
I0127 14:19:04.329206 1863329 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20327-1798877/.minikube CaCertPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20327-1798877/.minikube}
I0127 14:19:04.329248 1863329 buildroot.go:174] setting up certificates
I0127 14:19:04.329259 1863329 provision.go:84] configureAuth start
I0127 14:19:04.329269 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetMachineName
I0127 14:19:04.329540 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetIP
I0127 14:19:04.332411 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:19:04.332850 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
I0127 14:19:04.332878 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:19:04.333078 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
I0127 14:19:04.335728 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:19:04.336143 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
I0127 14:19:04.336174 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:19:04.336351 1863329 provision.go:143] copyHostCerts
I0127 14:19:04.336415 1863329 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.pem, removing ...
I0127 14:19:04.336439 1863329 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.pem
I0127 14:19:04.336527 1863329 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.pem (1078 bytes)
I0127 14:19:04.336664 1863329 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-1798877/.minikube/cert.pem, removing ...
I0127 14:19:04.336677 1863329 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-1798877/.minikube/cert.pem
I0127 14:19:04.336718 1863329 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20327-1798877/.minikube/cert.pem (1123 bytes)
I0127 14:19:04.336806 1863329 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-1798877/.minikube/key.pem, removing ...
I0127 14:19:04.336817 1863329 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-1798877/.minikube/key.pem
I0127 14:19:04.336852 1863329 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20327-1798877/.minikube/key.pem (1675 bytes)
I0127 14:19:04.336995 1863329 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca-key.pem org=jenkins.newest-cni-309688 san=[127.0.0.1 192.168.72.17 localhost minikube newest-cni-309688]
I0127 14:19:04.445795 1863329 provision.go:177] copyRemoteCerts
I0127 14:19:04.445894 1863329 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0127 14:19:04.445928 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
I0127 14:19:04.448735 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:19:04.449074 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
I0127 14:19:04.449106 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:19:04.449317 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
I0127 14:19:04.449501 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
I0127 14:19:04.449677 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
I0127 14:19:04.449816 1863329 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/newest-cni-309688/id_rsa Username:docker}
I0127 14:19:04.528783 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0127 14:19:04.552897 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I0127 14:19:04.575992 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0127 14:19:04.598152 1863329 provision.go:87] duration metric: took 268.879651ms to configureAuth
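[Editor's note: the provision.go:117 line above mints a server certificate signed by the local minikube CA carrying exactly the listed SANs. A sketch of the same idea with crypto/x509; the file paths are placeholders, the CA key is assumed to be PKCS#1-encoded PEM, and the lifetime is illustrative.]

// servercert_sketch.go - leaf server cert signed by a local CA, with the SANs from the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func mustPEM(path string) *pem.Block {
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in " + path)
	}
	return block
}

func main() {
	caCert, err := x509.ParseCertificate(mustPEM("certs/ca.pem").Bytes)
	if err != nil {
		panic(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(mustPEM("certs/ca-key.pem").Bytes) // assumption: PKCS#1 key
	if err != nil {
		panic(err)
	}
	leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-309688"}},
		// SAN set copied from the provision.go line above.
		DNSNames:    []string{"localhost", "minikube", "newest-cni-309688"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.17")},
		NotBefore:   time.Now().Add(-time.Hour),
		NotAfter:    time.Now().Add(3 * 365 * 24 * time.Hour), // illustrative lifetime
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &leafKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}) // the server.pem equivalent
}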
I0127 14:19:04.598183 1863329 buildroot.go:189] setting minikube options for container-runtime
I0127 14:19:04.598397 1863329 config.go:182] Loaded profile config "newest-cni-309688": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 14:19:04.598411 1863329 machine.go:96] duration metric: took 607.201271ms to provisionDockerMachine
I0127 14:19:04.598421 1863329 start.go:293] postStartSetup for "newest-cni-309688" (driver="kvm2")
I0127 14:19:04.598437 1863329 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0127 14:19:04.598481 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
I0127 14:19:04.598842 1863329 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0127 14:19:04.598874 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
I0127 14:19:04.601257 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:19:04.601599 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
I0127 14:19:04.601628 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:19:04.601759 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
I0127 14:19:04.601945 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
I0127 14:19:04.602093 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
I0127 14:19:04.602260 1863329 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/newest-cni-309688/id_rsa Username:docker}
I0127 14:19:04.685084 1863329 ssh_runner.go:195] Run: cat /etc/os-release
I0127 14:19:04.689047 1863329 info.go:137] Remote host: Buildroot 2023.02.9
I0127 14:19:04.689081 1863329 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-1798877/.minikube/addons for local assets ...
I0127 14:19:04.689137 1863329 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-1798877/.minikube/files for local assets ...
I0127 14:19:04.689212 1863329 filesync.go:149] local asset: /home/jenkins/minikube-integration/20327-1798877/.minikube/files/etc/ssl/certs/18060702.pem -> 18060702.pem in /etc/ssl/certs
I0127 14:19:04.689300 1863329 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0127 14:19:04.698109 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/files/etc/ssl/certs/18060702.pem --> /etc/ssl/certs/18060702.pem (1708 bytes)
I0127 14:19:04.723269 1863329 start.go:296] duration metric: took 124.828224ms for postStartSetup
I0127 14:19:04.723315 1863329 fix.go:56] duration metric: took 22.752659687s for fixHost
I0127 14:19:04.723339 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
I0127 14:19:04.726123 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:19:04.726570 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
I0127 14:19:04.726601 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:19:04.726820 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
I0127 14:19:04.727042 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
I0127 14:19:04.727229 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
I0127 14:19:04.727405 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
I0127 14:19:04.727627 1863329 main.go:141] libmachine: Using SSH client type: native
I0127 14:19:04.727869 1863329 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.72.17 22 <nil> <nil>}
I0127 14:19:04.727885 1863329 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0127 14:19:04.831094 1863329 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737987544.794055340
I0127 14:19:04.831118 1863329 fix.go:216] guest clock: 1737987544.794055340
I0127 14:19:04.831124 1863329 fix.go:229] Guest: 2025-01-27 14:19:04.79405534 +0000 UTC Remote: 2025-01-27 14:19:04.723319581 +0000 UTC m=+22.912787075 (delta=70.735759ms)
I0127 14:19:04.831145 1863329 fix.go:200] guest clock delta is within tolerance: 70.735759ms
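[Editor's note: the fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the drift if it is under a tolerance. Roughly the following, with an illustrative 2s threshold; the real tolerance is defined in fix.go.]

// clockdelta_sketch.go - the check behind "guest clock delta is within tolerance".
package main

import (
	"fmt"
	"time"
)

func main() {
	guest := time.Unix(1737987544, 794055340) // parsed from `date +%s.%N` on the VM
	host := time.Date(2025, 1, 27, 14, 19, 4, 723319581, time.UTC)
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // illustrative, not minikube's exact value
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= tolerance)
}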
I0127 14:19:04.831149 1863329 start.go:83] releasing machines lock for "newest-cni-309688", held for 22.860512585s
I0127 14:19:04.831167 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
I0127 14:19:04.831433 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetIP
I0127 14:19:04.834349 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:19:04.834694 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
I0127 14:19:04.834718 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:19:04.834915 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
I0127 14:19:04.835447 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
I0127 14:19:04.835626 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
I0127 14:19:04.835729 1863329 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0127 14:19:04.835772 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
I0127 14:19:04.835799 1863329 ssh_runner.go:195] Run: cat /version.json
I0127 14:19:04.835821 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
I0127 14:19:04.838501 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:19:04.838695 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:19:04.838855 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
I0127 14:19:04.838881 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:19:04.839077 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
I0127 14:19:04.839082 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
I0127 14:19:04.839117 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:19:04.839262 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
I0127 14:19:04.839272 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
I0127 14:19:04.839481 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
I0127 14:19:04.839482 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
I0127 14:19:04.839635 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
I0127 14:19:04.839648 1863329 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/newest-cni-309688/id_rsa Username:docker}
I0127 14:19:04.839742 1863329 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/newest-cni-309688/id_rsa Username:docker}
I0127 14:19:04.942379 1863329 ssh_runner.go:195] Run: systemctl --version
I0127 14:19:04.948168 1863329 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0127 14:19:04.953645 1863329 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0127 14:19:04.953703 1863329 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0127 14:19:04.969617 1863329 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0127 14:19:04.969646 1863329 start.go:495] detecting cgroup driver to use...
I0127 14:19:04.969742 1863329 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0127 14:19:05.001151 1863329 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0127 14:19:05.014859 1863329 docker.go:217] disabling cri-docker service (if available) ...
I0127 14:19:05.014928 1863329 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0127 14:19:05.030145 1863329 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0127 14:19:05.044008 1863329 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0127 14:19:05.174941 1863329 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0127 14:19:05.330526 1863329 docker.go:233] disabling docker service ...
I0127 14:19:05.330619 1863329 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0127 14:19:05.345183 1863329 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0127 14:19:05.357628 1863329 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0127 14:19:05.474635 1863329 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0127 14:19:05.587063 1863329 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0127 14:19:05.600224 1863329 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0127 14:19:05.616896 1863329 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0127 14:19:05.628539 1863329 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0127 14:19:05.639531 1863329 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0127 14:19:05.639605 1863329 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0127 14:19:05.649978 1863329 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0127 14:19:05.659986 1863329 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0127 14:19:05.669665 1863329 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0127 14:19:05.680018 1863329 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0127 14:19:05.690041 1863329 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0127 14:19:05.699586 1863329 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0127 14:19:05.709482 1863329 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
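[Editor's note: of the sed runs above, the SystemdCgroup edit is the one that actually selects the cgroupfs driver announced at containerd.go:146. For reference, the same substitution expressed with Go's regexp instead of sed.]

// cgroupfs_sketch.go - Go equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
package main

import (
	"fmt"
	"regexp"
)

func main() {
	cfg := []byte(`[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
`)
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Print(string(re.ReplaceAll(cfg, []byte("${1}SystemdCgroup = false"))))
}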
I0127 14:19:05.719643 1863329 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0127 14:19:05.728454 1863329 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0127 14:19:05.728520 1863329 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0127 14:19:05.743292 1863329 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0127 14:19:05.752875 1863329 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 14:19:05.862682 1863329 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0127 14:19:05.897001 1863329 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0127 14:19:05.897074 1863329 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0127 14:19:05.901946 1863329 retry.go:31] will retry after 1.257073282s: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
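[Editor's note: the retry above is a bounded poll on the socket path, failing the stat until containerd finishes restarting. A sketch of the loop, with a flat 1s sleep standing in for retry.go's jittered backoff.]

// sockwait_sketch.go - shape of the "Will wait 60s for socket path" loop.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket showed up: the containerd restart finished
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
}

func main() {
	fmt.Println(waitForSocket("/run/containerd/containerd.sock", 60*time.Second))
}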
I0127 14:19:07.159917 1863329 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0127 14:19:07.165117 1863329 start.go:563] Will wait 60s for crictl version
I0127 14:19:07.165209 1863329 ssh_runner.go:195] Run: which crictl
I0127 14:19:07.168995 1863329 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0127 14:19:07.209167 1863329 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.7.23
RuntimeApiVersion: v1
I0127 14:19:07.209244 1863329 ssh_runner.go:195] Run: containerd --version
I0127 14:19:07.236320 1863329 ssh_runner.go:195] Run: containerd --version
I0127 14:19:07.261054 1863329 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
I0127 14:19:07.262245 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetIP
I0127 14:19:07.265288 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:19:07.265739 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
I0127 14:19:07.265772 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:19:07.265980 1863329 ssh_runner.go:195] Run: grep 192.168.72.1 host.minikube.internal$ /etc/hosts
I0127 14:19:07.270111 1863329 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0127 14:19:07.283905 1863329 out.go:177] - kubeadm.pod-network-cidr=10.42.0.0/16
I0127 14:19:07.406716 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
I0127 14:19:09.905446 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
I0127 14:19:07.285143 1863329 kubeadm.go:883] updating cluster {Name:newest-cni-309688 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-309688 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.17 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0127 14:19:07.285271 1863329 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
I0127 14:19:07.285342 1863329 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 14:19:07.314913 1863329 containerd.go:627] all images are preloaded for containerd runtime.
I0127 14:19:07.314944 1863329 containerd.go:534] Images already preloaded, skipping extraction
I0127 14:19:07.315010 1863329 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 14:19:07.345742 1863329 containerd.go:627] all images are preloaded for containerd runtime.
I0127 14:19:07.345770 1863329 cache_images.go:84] Images are preloaded, skipping loading
I0127 14:19:07.345779 1863329 kubeadm.go:934] updating node { 192.168.72.17 8443 v1.32.1 containerd true true} ...
I0127 14:19:07.345897 1863329 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-309688 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.17
[Install]
config:
{KubernetesVersion:v1.32.1 ClusterName:newest-cni-309688 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0127 14:19:07.345956 1863329 ssh_runner.go:195] Run: sudo crictl info
I0127 14:19:07.379712 1863329 cni.go:84] Creating CNI manager for ""
I0127 14:19:07.379740 1863329 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0127 14:19:07.379759 1863329 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
I0127 14:19:07.379800 1863329 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.17 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-309688 NodeName:newest-cni-309688 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.17"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.17 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0127 14:19:07.379979 1863329 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.72.17
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "newest-cni-309688"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.72.17"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.72.17"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      - name: "proxy-refresh-interval"
        value: "70000"
kubernetesVersion: v1.32.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.42.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.42.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
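[Editor's note: one sanity check this config invites: the pod CIDR (10.42.0.0/16, injected via kubeadm.pod-network-cidr) must not overlap the service CIDR (10.96.0.0/12). Not a minikube step, just a quick way to verify the two values with net/netip.]

// cidrcheck_sketch.go - confirm the pod and service CIDRs above are disjoint.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	pod := netip.MustParsePrefix("10.42.0.0/16") // podSubnet / clusterCIDR
	svc := netip.MustParsePrefix("10.96.0.0/12") // serviceSubnet
	fmt.Println("overlap:", pod.Overlaps(svc))   // prints false: safe to use together
}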
I0127 14:19:07.380049 1863329 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
I0127 14:19:07.390315 1863329 binaries.go:44] Found k8s binaries, skipping transfer
I0127 14:19:07.390456 1863329 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0127 14:19:07.399585 1863329 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
I0127 14:19:07.417531 1863329 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0127 14:19:07.433514 1863329 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
I0127 14:19:07.449318 1863329 ssh_runner.go:195] Run: grep 192.168.72.17 control-plane.minikube.internal$ /etc/hosts
I0127 14:19:07.452848 1863329 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.17 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0127 14:19:07.464375 1863329 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 14:19:07.590492 1863329 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0127 14:19:07.609018 1863329 certs.go:68] Setting up /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/newest-cni-309688 for IP: 192.168.72.17
I0127 14:19:07.609048 1863329 certs.go:194] generating shared ca certs ...
I0127 14:19:07.609072 1863329 certs.go:226] acquiring lock for ca certs: {Name:mkc6b95fb3d2c0d0c7049cde446028a0d731f231 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 14:19:07.609277 1863329 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.key
I0127 14:19:07.609328 1863329 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/proxy-client-ca.key
I0127 14:19:07.609339 1863329 certs.go:256] generating profile certs ...
I0127 14:19:07.609434 1863329 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/newest-cni-309688/client.key
I0127 14:19:07.609500 1863329 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/newest-cni-309688/apiserver.key.54b7a6ae
I0127 14:19:07.609534 1863329 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/newest-cni-309688/proxy-client.key
I0127 14:19:07.609661 1863329 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/1806070.pem (1338 bytes)
W0127 14:19:07.609700 1863329 certs.go:480] ignoring /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/1806070_empty.pem, impossibly tiny 0 bytes
I0127 14:19:07.609707 1863329 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca-key.pem (1675 bytes)
I0127 14:19:07.609732 1863329 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/ca.pem (1078 bytes)
I0127 14:19:07.609776 1863329 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/cert.pem (1123 bytes)
I0127 14:19:07.609807 1863329 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/key.pem (1675 bytes)
I0127 14:19:07.609872 1863329 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-1798877/.minikube/files/etc/ssl/certs/18060702.pem (1708 bytes)
I0127 14:19:07.613389 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0127 14:19:07.649675 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0127 14:19:07.678577 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0127 14:19:07.707466 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0127 14:19:07.736820 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/newest-cni-309688/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I0127 14:19:07.764078 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/newest-cni-309688/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0127 14:19:07.791040 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/newest-cni-309688/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0127 14:19:07.817979 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/profiles/newest-cni-309688/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0127 14:19:07.846978 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0127 14:19:07.869002 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/certs/1806070.pem --> /usr/share/ca-certificates/1806070.pem (1338 bytes)
I0127 14:19:07.892530 1863329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-1798877/.minikube/files/etc/ssl/certs/18060702.pem --> /usr/share/ca-certificates/18060702.pem (1708 bytes)
I0127 14:19:07.917138 1863329 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0127 14:19:07.933638 1863329 ssh_runner.go:195] Run: openssl version
I0127 14:19:07.939662 1863329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0127 14:19:07.951267 1863329 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0127 14:19:07.955439 1863329 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 13:02 /usr/share/ca-certificates/minikubeCA.pem
I0127 14:19:07.955494 1863329 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0127 14:19:07.961014 1863329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0127 14:19:07.972145 1863329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1806070.pem && ln -fs /usr/share/ca-certificates/1806070.pem /etc/ssl/certs/1806070.pem"
I0127 14:19:07.983517 1863329 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1806070.pem
I0127 14:19:07.987671 1863329 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 13:10 /usr/share/ca-certificates/1806070.pem
I0127 14:19:07.987719 1863329 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1806070.pem
I0127 14:19:07.993079 1863329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1806070.pem /etc/ssl/certs/51391683.0"
I0127 14:19:08.004139 1863329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18060702.pem && ln -fs /usr/share/ca-certificates/18060702.pem /etc/ssl/certs/18060702.pem"
I0127 14:19:08.015248 1863329 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18060702.pem
I0127 14:19:08.019068 1863329 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 13:10 /usr/share/ca-certificates/18060702.pem
I0127 14:19:08.019113 1863329 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18060702.pem
I0127 14:19:08.024062 1863329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18060702.pem /etc/ssl/certs/3ec20f2e.0"
I0127 14:19:08.033948 1863329 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0127 14:19:08.038251 1863329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0127 14:19:08.043547 1863329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0127 14:19:08.048804 1863329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0127 14:19:08.054182 1863329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0127 14:19:08.059290 1863329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0127 14:19:08.064227 1863329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
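[Editor's note: each `openssl x509 -checkend 86400` run above asks whether the cert expires within the next 24h; a non-zero exit would trigger regeneration. The equivalent check in Go; the path is a placeholder for any of the certs checked.]

// certexpiry_sketch.go - what -checkend 86400 verifies, via crypto/x509.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// true means the cert's NotAfter falls inside the window, i.e. openssl would exit non-zero.
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	fmt.Println(expiring, err)
}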
I0127 14:19:08.069315 1863329 kubeadm.go:392] StartCluster: {Name:newest-cni-309688 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-309688 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.17 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0127 14:19:08.069441 1863329 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0127 14:19:08.069490 1863329 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0127 14:19:08.106407 1863329 cri.go:89] found id: "44b672df53953b732ea500d76a4756206dc50a08c2d6b754926b1020d937a666"
I0127 14:19:08.106434 1863329 cri.go:89] found id: "d08ad8936ceecf173622b281d5ae29f9fbdbd8fe6353ed74c00a8e8b03334186"
I0127 14:19:08.106441 1863329 cri.go:89] found id: "7fc387defef0a4c4430ceb40aa56357e3f8ea2077e77e299bb4b9ccb7a6a75cf"
I0127 14:19:08.106446 1863329 cri.go:89] found id: "72112d2b81cdd6ac4560355f744a26e9c5cd6cd2e9f9f63202a712a16dfa5199"
I0127 14:19:08.106450 1863329 cri.go:89] found id: "0a8a0cffc7917f1830cb86377be31b37fb058bfe76809a93b25e1dc44dad8698"
I0127 14:19:08.106455 1863329 cri.go:89] found id: "0bf821f494ac942182c8a3fca0a6155ad4325e877c929f8ef786df037f782f63"
I0127 14:19:08.106459 1863329 cri.go:89] found id: "2fa74aab2d8093b8579b8fd14703a42fd0048faec3516163708a7a8983c472bb"
I0127 14:19:08.106463 1863329 cri.go:89] found id: "89b35d977739b1ce363c0dfb07c53551dff2297a944ca70140b27fddb89fcbfe"
I0127 14:19:08.106467 1863329 cri.go:89] found id: ""
I0127 14:19:08.106525 1863329 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
W0127 14:19:08.121718 1863329 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-01-27T14:19:08Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I0127 14:19:08.121817 1863329 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0127 14:19:08.131128 1863329 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0127 14:19:08.131152 1863329 kubeadm.go:593] restartPrimaryControlPlane start ...
I0127 14:19:08.131206 1863329 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0127 14:19:08.141323 1863329 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0127 14:19:08.142436 1863329 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-309688" does not appear in /home/jenkins/minikube-integration/20327-1798877/kubeconfig
I0127 14:19:08.143126 1863329 kubeconfig.go:62] /home/jenkins/minikube-integration/20327-1798877/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-309688" cluster setting kubeconfig missing "newest-cni-309688" context setting]
I0127 14:19:08.144090 1863329 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-1798877/kubeconfig: {Name:mk83da0b53bf0d0962bc51b16c589da37a41b6e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 14:19:08.145938 1863329 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0127 14:19:08.155827 1863329 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.17
I0127 14:19:08.155862 1863329 kubeadm.go:1160] stopping kube-system containers ...
I0127 14:19:08.155887 1863329 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
I0127 14:19:08.155943 1863329 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0127 14:19:08.191753 1863329 cri.go:89] found id: "44b672df53953b732ea500d76a4756206dc50a08c2d6b754926b1020d937a666"
I0127 14:19:08.191787 1863329 cri.go:89] found id: "d08ad8936ceecf173622b281d5ae29f9fbdbd8fe6353ed74c00a8e8b03334186"
I0127 14:19:08.191794 1863329 cri.go:89] found id: "7fc387defef0a4c4430ceb40aa56357e3f8ea2077e77e299bb4b9ccb7a6a75cf"
I0127 14:19:08.191799 1863329 cri.go:89] found id: "72112d2b81cdd6ac4560355f744a26e9c5cd6cd2e9f9f63202a712a16dfa5199"
I0127 14:19:08.191804 1863329 cri.go:89] found id: "0a8a0cffc7917f1830cb86377be31b37fb058bfe76809a93b25e1dc44dad8698"
I0127 14:19:08.191808 1863329 cri.go:89] found id: "0bf821f494ac942182c8a3fca0a6155ad4325e877c929f8ef786df037f782f63"
I0127 14:19:08.191812 1863329 cri.go:89] found id: "2fa74aab2d8093b8579b8fd14703a42fd0048faec3516163708a7a8983c472bb"
I0127 14:19:08.191817 1863329 cri.go:89] found id: "89b35d977739b1ce363c0dfb07c53551dff2297a944ca70140b27fddb89fcbfe"
I0127 14:19:08.191822 1863329 cri.go:89] found id: ""
I0127 14:19:08.191829 1863329 cri.go:252] Stopping containers: [44b672df53953b732ea500d76a4756206dc50a08c2d6b754926b1020d937a666 d08ad8936ceecf173622b281d5ae29f9fbdbd8fe6353ed74c00a8e8b03334186 7fc387defef0a4c4430ceb40aa56357e3f8ea2077e77e299bb4b9ccb7a6a75cf 72112d2b81cdd6ac4560355f744a26e9c5cd6cd2e9f9f63202a712a16dfa5199 0a8a0cffc7917f1830cb86377be31b37fb058bfe76809a93b25e1dc44dad8698 0bf821f494ac942182c8a3fca0a6155ad4325e877c929f8ef786df037f782f63 2fa74aab2d8093b8579b8fd14703a42fd0048faec3516163708a7a8983c472bb 89b35d977739b1ce363c0dfb07c53551dff2297a944ca70140b27fddb89fcbfe]
I0127 14:19:08.191909 1863329 ssh_runner.go:195] Run: which crictl
I0127 14:19:08.195781 1863329 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 44b672df53953b732ea500d76a4756206dc50a08c2d6b754926b1020d937a666 d08ad8936ceecf173622b281d5ae29f9fbdbd8fe6353ed74c00a8e8b03334186 7fc387defef0a4c4430ceb40aa56357e3f8ea2077e77e299bb4b9ccb7a6a75cf 72112d2b81cdd6ac4560355f744a26e9c5cd6cd2e9f9f63202a712a16dfa5199 0a8a0cffc7917f1830cb86377be31b37fb058bfe76809a93b25e1dc44dad8698 0bf821f494ac942182c8a3fca0a6155ad4325e877c929f8ef786df037f782f63 2fa74aab2d8093b8579b8fd14703a42fd0048faec3516163708a7a8983c472bb 89b35d977739b1ce363c0dfb07c53551dff2297a944ca70140b27fddb89fcbfe
I0127 14:19:08.232200 1863329 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0127 14:19:08.248830 1863329 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0127 14:19:08.258186 1863329 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0127 14:19:08.258248 1863329 kubeadm.go:157] found existing configuration files:
I0127 14:19:08.258301 1863329 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0127 14:19:08.266710 1863329 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0127 14:19:08.266787 1863329 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0127 14:19:08.276679 1863329 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0127 14:19:08.285327 1863329 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0127 14:19:08.285384 1863329 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0127 14:19:08.293919 1863329 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0127 14:19:08.302352 1863329 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0127 14:19:08.302466 1863329 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0127 14:19:08.314481 1863329 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0127 14:19:08.324318 1863329 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0127 14:19:08.324378 1863329 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
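[Editor's note: the grep-then-rm cycle above implements a simple policy: keep a kubeconfig only if it already points at the expected control-plane endpoint, and treat a missing file as already clean. A compact sketch of that policy; dropIfStale is an illustrative helper, not minikube's API.]

// staleconf_sketch.go - keep a kubeconfig only if it names the expected endpoint.
package main

import (
	"bytes"
	"fmt"
	"os"
)

func dropIfStale(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if os.IsNotExist(err) {
		return nil // same outcome as the "No such file or directory" greps above
	}
	if err != nil {
		return err
	}
	if !bytes.Contains(data, []byte(endpoint)) {
		return os.Remove(path) // stale endpoint: let kubeadm init phase regenerate it
	}
	return nil
}

func main() {
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		fmt.Println(f, dropIfStale("/etc/kubernetes/"+f, "https://control-plane.minikube.internal:8443"))
	}
}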
I0127 14:19:08.333925 1863329 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0127 14:19:08.343981 1863329 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0127 14:19:08.484856 1863329 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0127 14:19:09.407056 1863329 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0127 14:19:09.612649 1863329 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0127 14:19:09.691321 1863329 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0127 14:19:09.780355 1863329 api_server.go:52] waiting for apiserver process to appear ...
I0127 14:19:09.780450 1863329 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 14:19:10.281441 1863329 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 14:19:10.780982 1863329 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 14:19:10.803824 1863329 api_server.go:72] duration metric: took 1.023465596s to wait for apiserver process to appear ...
I0127 14:19:10.803860 1863329 api_server.go:88] waiting for apiserver healthz status ...
I0127 14:19:10.803886 1863329 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8443/healthz ...
I0127 14:19:10.804578 1863329 api_server.go:269] stopped: https://192.168.72.17:8443/healthz: Get "https://192.168.72.17:8443/healthz": dial tcp 192.168.72.17:8443: connect: connection refused
I0127 14:19:11.304934 1863329 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8443/healthz ...
I0127 14:19:11.906081 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
I0127 14:19:13.906183 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
I0127 14:19:13.554007 1863329 api_server.go:279] https://192.168.72.17:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0127 14:19:13.554061 1863329 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8443/healthz ...
I0127 14:19:13.596380 1863329 api_server.go:279] https://192.168.72.17:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0127 14:19:13.804894 1863329 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8443/healthz ...
I0127 14:19:13.819580 1863329 api_server.go:279] https://192.168.72.17:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[-]poststarthook/apiservice-discovery-controller failed: reason withheld
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0127 14:19:14.304214 1863329 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8443/healthz ...
I0127 14:19:14.309598 1863329 api_server.go:279] https://192.168.72.17:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[-]poststarthook/apiservice-discovery-controller failed: reason withheld
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0127 14:19:14.804236 1863329 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8443/healthz ...
I0127 14:19:14.809512 1863329 api_server.go:279] https://192.168.72.17:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[-]poststarthook/apiservice-discovery-controller failed: reason withheld
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0127 14:19:15.304181 1863329 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8443/healthz ...
I0127 14:19:15.309590 1863329 api_server.go:279] https://192.168.72.17:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[-]poststarthook/apiservice-discovery-controller failed: reason withheld
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0127 14:19:15.803958 1863329 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8443/healthz ...
I0127 14:19:15.813848 1863329 api_server.go:279] https://192.168.72.17:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[-]poststarthook/apiservice-discovery-controller failed: reason withheld
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0127 14:19:16.304624 1863329 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8443/healthz ...
I0127 14:19:16.310313 1863329 api_server.go:279] https://192.168.72.17:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[-]poststarthook/apiservice-discovery-controller failed: reason withheld
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0127 14:19:16.804590 1863329 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8443/healthz ...
I0127 14:19:16.809168 1863329 api_server.go:279] https://192.168.72.17:8443/healthz returned 200:
ok
I0127 14:19:16.816088 1863329 api_server.go:141] control plane version: v1.32.1
I0127 14:19:16.816123 1863329 api_server.go:131] duration metric: took 6.012253595s to wait for apiserver health ...
I0127 14:19:16.816135 1863329 cni.go:84] Creating CNI manager for ""
I0127 14:19:16.816144 1863329 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0127 14:19:16.817843 1863329 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0127 14:19:16.819038 1863329 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0127 14:19:16.829479 1863329 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
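The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration announced earlier. The log does not show the bytes themselves, so the conflist below is an illustrative guess modeled on minikube's bridge template (the subnet and plugin set are assumptions), wrapped in a small writer that must run as root.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	dir := "/etc/cni/net.d" // mirrors the `sudo mkdir -p` above
	if err := os.MkdirAll(dir, 0o755); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(conflist), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}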
I0127 14:19:16.847164 1863329 system_pods.go:43] waiting for kube-system pods to appear ...
I0127 14:19:16.857140 1863329 system_pods.go:59] 9 kube-system pods found
I0127 14:19:16.857176 1863329 system_pods.go:61] "coredns-668d6bf9bc-f66f4" [b30ba9c6-eb6e-44c1-b389-96263bb405a1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0127 14:19:16.857187 1863329 system_pods.go:61] "coredns-668d6bf9bc-pt6d2" [d8cf3c75-3646-40d8-8131-efd331e2cec7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0127 14:19:16.857198 1863329 system_pods.go:61] "etcd-newest-cni-309688" [f963f636-8186-4dd8-8263-b7bc29d15bc0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0127 14:19:16.857210 1863329 system_pods.go:61] "kube-apiserver-newest-cni-309688" [90584ec0-2731-48e1-a2e6-5bef7b170386] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0127 14:19:16.857219 1863329 system_pods.go:61] "kube-controller-manager-newest-cni-309688" [ef3b3e37-08d8-48aa-a55e-55d4c87c8189] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0127 14:19:16.857227 1863329 system_pods.go:61] "kube-proxy-8mwp9" [ebb658f3-eba2-4743-94cd-da996046bd02] Running
I0127 14:19:16.857236 1863329 system_pods.go:61] "kube-scheduler-newest-cni-309688" [07bcbbe9-474a-4bc2-9f58-f889fa685754] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0127 14:19:16.857263 1863329 system_pods.go:61] "metrics-server-f79f97bbb-jw4m9" [85929ac8-142c-4bc7-90da-5c13f9ff3c0c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0127 14:19:16.857277 1863329 system_pods.go:61] "storage-provisioner" [94820502-acf5-4297-8fc9-d4b4953b01ab] Running
I0127 14:19:16.857287 1863329 system_pods.go:74] duration metric: took 10.102454ms to wait for pod list to return data ...
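The system_pods wait above is a plain listing of the kube-system namespace with each pod's phase and readiness gates rendered inline. A rough equivalent using client-go (an assumed dependency, pinned in go.mod; minikube's own helper lives in system_pods.go), pointed at the kubeconfig path from the log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20327-1798877/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
	}
}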
I0127 14:19:16.857300 1863329 node_conditions.go:102] verifying NodePressure condition ...
I0127 14:19:16.860835 1863329 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0127 14:19:16.860862 1863329 node_conditions.go:123] node cpu capacity is 2
I0127 14:19:16.860886 1863329 node_conditions.go:105] duration metric: took 3.575582ms to run NodePressure ...
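The NodePressure step reads each node's capacity (the 17734596Ki of ephemeral storage and 2 CPUs above) and verifies that no pressure condition is True. A sketch under the same client-go assumption as the previous snippet:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20327-1798877/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Println("node storage ephemeral capacity is", n.Status.Capacity.StorageEphemeral().String())
		fmt.Println("node cpu capacity is", n.Status.Capacity.Cpu().String())
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if c.Status == corev1.ConditionTrue {
					fmt.Println("node reports pressure:", c.Type)
				}
			}
		}
	}
}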
I0127 14:19:16.860913 1863329 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0127 14:19:17.133479 1863329 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0127 14:19:17.144656 1863329 ops.go:34] apiserver oom_adj: -16
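The oom_adj probe above reads /proc/<pid>/oom_adj for the apiserver; -16 is the legacy-scale value the kernel derives from the oom_score_adj of roughly -997 that the kubelet assigns to node-critical static pods, so it doubles as a cheap confirmation that the kubelet, not a leftover process, is running the apiserver. The probe as a sketch, with the shell pipeline copied from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("/bin/bash", "-c", "cat /proc/$(pgrep kube-apiserver)/oom_adj").Output()
	if err != nil {
		fmt.Println("apiserver not running:", err)
		return
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(out)))
}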
I0127 14:19:17.144684 1863329 kubeadm.go:597] duration metric: took 9.013524206s to restartPrimaryControlPlane
I0127 14:19:17.144695 1863329 kubeadm.go:394] duration metric: took 9.075390076s to StartCluster
I0127 14:19:17.144715 1863329 settings.go:142] acquiring lock: {Name:mk26fe6d7b14cf85ba842a23d71a5c576b147570 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 14:19:17.144810 1863329 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20327-1798877/kubeconfig
I0127 14:19:17.146498 1863329 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-1798877/kubeconfig: {Name:mk83da0b53bf0d0962bc51b16c589da37a41b6e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 14:19:17.146819 1863329 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.17 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0127 14:19:17.146906 1863329 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
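The toEnable map above spells out the full addon catalogue with only dashboard, default-storageclass, metrics-server and storage-provisioner set to true; the "should already be in state true" warnings that follow are the restart path re-asserting addons the profile had before the stop. A toy reduction of how such a map drives the enable loop (the printed line imitates the log; this is not minikube's addons.go):

package main

import "fmt"

func main() {
	toEnable := map[string]bool{
		"dashboard":            true,
		"default-storageclass": true,
		"metrics-server":       true,
		"storage-provisioner":  true,
		"ingress":              false, // the other ~30 addons stay false
	}
	for name, enabled := range toEnable {
		if enabled {
			fmt.Printf("Setting addon %s=true in \"newest-cni-309688\"\n", name)
		}
	}
}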
I0127 14:19:17.147019 1863329 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-309688"
I0127 14:19:17.147042 1863329 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-309688"
I0127 14:19:17.147041 1863329 addons.go:69] Setting default-storageclass=true in profile "newest-cni-309688"
W0127 14:19:17.147054 1863329 addons.go:247] addon storage-provisioner should already be in state true
I0127 14:19:17.147075 1863329 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-309688"
I0127 14:19:17.147081 1863329 config.go:182] Loaded profile config "newest-cni-309688": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 14:19:17.147079 1863329 addons.go:69] Setting dashboard=true in profile "newest-cni-309688"
I0127 14:19:17.147063 1863329 addons.go:69] Setting metrics-server=true in profile "newest-cni-309688"
I0127 14:19:17.147150 1863329 addons.go:238] Setting addon metrics-server=true in "newest-cni-309688"
W0127 14:19:17.147164 1863329 addons.go:247] addon metrics-server should already be in state true
I0127 14:19:17.147190 1863329 host.go:66] Checking if "newest-cni-309688" exists ...
I0127 14:19:17.147088 1863329 host.go:66] Checking if "newest-cni-309688" exists ...
I0127 14:19:17.147127 1863329 addons.go:238] Setting addon dashboard=true in "newest-cni-309688"
W0127 14:19:17.147431 1863329 addons.go:247] addon dashboard should already be in state true
I0127 14:19:17.147463 1863329 host.go:66] Checking if "newest-cni-309688" exists ...
I0127 14:19:17.147523 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
I0127 14:19:17.147558 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
I0127 14:19:17.147565 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:19:17.147607 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:19:17.147687 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
I0127 14:19:17.147718 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:19:17.147797 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
I0127 14:19:17.147810 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:19:17.148440 1863329 out.go:177] * Verifying Kubernetes components...
I0127 14:19:17.149687 1863329 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 14:19:17.163903 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42235
I0127 14:19:17.164136 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36993
I0127 14:19:17.164313 1863329 main.go:141] libmachine: () Calling .GetVersion
I0127 14:19:17.164874 1863329 main.go:141] libmachine: () Calling .GetVersion
I0127 14:19:17.165122 1863329 main.go:141] libmachine: Using API Version 1
I0127 14:19:17.165143 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:19:17.165396 1863329 main.go:141] libmachine: Using API Version 1
I0127 14:19:17.165415 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:19:17.165676 1863329 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:19:17.165822 1863329 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:19:17.165886 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetState
I0127 14:19:17.166471 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
I0127 14:19:17.166526 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:19:17.175217 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40451
I0127 14:19:17.175873 1863329 main.go:141] libmachine: () Calling .GetVersion
I0127 14:19:17.176532 1863329 main.go:141] libmachine: Using API Version 1
I0127 14:19:17.176558 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:19:17.176979 1863329 addons.go:238] Setting addon default-storageclass=true in "newest-cni-309688"
I0127 14:19:17.176997 1863329 main.go:141] libmachine: () Calling .GetMachineName
W0127 14:19:17.177002 1863329 addons.go:247] addon default-storageclass should already be in state true
I0127 14:19:17.177080 1863329 host.go:66] Checking if "newest-cni-309688" exists ...
I0127 14:19:17.177500 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
I0127 14:19:17.177518 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
I0127 14:19:17.177541 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:19:17.177556 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:19:17.192916 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40507
I0127 14:19:17.193458 1863329 main.go:141] libmachine: () Calling .GetVersion
I0127 14:19:17.194088 1863329 main.go:141] libmachine: Using API Version 1
I0127 14:19:17.194110 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:19:17.194524 1863329 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:19:17.195179 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
I0127 14:19:17.195214 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:19:17.196238 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38515
I0127 14:19:17.196598 1863329 main.go:141] libmachine: () Calling .GetVersion
I0127 14:19:17.196918 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35939
I0127 14:19:17.197180 1863329 main.go:141] libmachine: Using API Version 1
I0127 14:19:17.197200 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:19:17.197360 1863329 main.go:141] libmachine: () Calling .GetVersion
I0127 14:19:17.197480 1863329 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:19:17.197523 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35899
I0127 14:19:17.197802 1863329 main.go:141] libmachine: Using API Version 1
I0127 14:19:17.197813 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:19:17.198103 1863329 main.go:141] libmachine: () Calling .GetVersion
I0127 14:19:17.198164 1863329 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:19:17.198321 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetState
I0127 14:19:17.198535 1863329 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
I0127 14:19:17.198583 1863329 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:19:17.198888 1863329 main.go:141] libmachine: Using API Version 1
I0127 14:19:17.198902 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:19:17.199305 1863329 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:19:17.199518 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetState
I0127 14:19:17.200369 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
I0127 14:19:17.201165 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
I0127 14:19:17.202593 1863329 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0127 14:19:17.202676 1863329 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0127 14:19:17.203794 1863329 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0127 14:19:17.203807 1863329 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0127 14:19:17.203824 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
I0127 14:19:17.203911 1863329 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0127 14:19:17.203926 1863329 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0127 14:19:17.203944 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
I0127 14:19:17.207477 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:19:17.207978 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
I0127 14:19:17.208029 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:19:17.208889 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
I0127 14:19:17.209077 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
I0127 14:19:17.209227 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
I0127 14:19:17.209363 1863329 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/newest-cni-309688/id_rsa Username:docker}
I0127 14:19:17.216222 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:19:17.216592 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39131
I0127 14:19:17.216814 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
I0127 14:19:17.216831 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:19:17.216961 1863329 main.go:141] libmachine: () Calling .GetVersion
I0127 14:19:17.217064 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
I0127 14:19:17.217256 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
I0127 14:19:17.217411 1863329 main.go:141] libmachine: Using API Version 1
I0127 14:19:17.217422 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:19:17.217463 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
I0127 14:19:17.217578 1863329 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/newest-cni-309688/id_rsa Username:docker}
I0127 14:19:17.217795 1863329 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:19:17.217839 1863329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46065
I0127 14:19:17.218152 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetState
I0127 14:19:17.218203 1863329 main.go:141] libmachine: () Calling .GetVersion
I0127 14:19:17.218804 1863329 main.go:141] libmachine: Using API Version 1
I0127 14:19:17.218816 1863329 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:19:17.219270 1863329 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:19:17.219480 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetState
I0127 14:19:17.219969 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
I0127 14:19:17.220954 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .DriverName
I0127 14:19:17.221278 1863329 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0127 14:19:17.221291 1863329 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0127 14:19:17.221312 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
I0127 14:19:17.221888 1863329 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0127 14:19:17.223572 1863329 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0127 14:19:17.225013 1863329 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0127 14:19:17.225038 1863329 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0127 14:19:17.225052 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHHostname
I0127 14:19:17.225188 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:19:17.225554 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
I0127 14:19:17.225777 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:19:17.225825 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
I0127 14:19:17.226023 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
I0127 14:19:17.226118 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
I0127 14:19:17.226242 1863329 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/newest-cni-309688/id_rsa Username:docker}
I0127 14:19:17.228625 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:19:17.228937 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:25:ab", ip: ""} in network mk-newest-cni-309688: {Iface:virbr4 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:1b:25:ab Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:newest-cni-309688 Clientid:01:52:54:00:1b:25:ab}
I0127 14:19:17.228977 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | domain newest-cni-309688 has defined IP address 192.168.72.17 and MAC address 52:54:00:1b:25:ab in network mk-newest-cni-309688
I0127 14:19:17.229171 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHPort
I0127 14:19:17.229344 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHKeyPath
I0127 14:19:17.229536 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .GetSSHUsername
I0127 14:19:17.229794 1863329 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/newest-cni-309688/id_rsa Username:docker}
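Each addon installer above gets its own SSH session into the VM, authenticating as user docker with the per-machine id_rsa. A reduced sketch using golang.org/x/crypto/ssh (an assumed dependency; minikube's wrapper is sshutil.go), with host-key checking skipped as is usual for a throwaway local VM:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/newest-cni-309688/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "192.168.72.17:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("new ssh client:", client.RemoteAddr())
}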
I0127 14:19:17.331878 1863329 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0127 14:19:17.351919 1863329 api_server.go:52] waiting for apiserver process to appear ...
I0127 14:19:17.352011 1863329 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 14:19:17.365611 1863329 api_server.go:72] duration metric: took 218.744274ms to wait for apiserver process to appear ...
I0127 14:19:17.365637 1863329 api_server.go:88] waiting for apiserver healthz status ...
I0127 14:19:17.365655 1863329 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8443/healthz ...
I0127 14:19:17.372023 1863329 api_server.go:279] https://192.168.72.17:8443/healthz returned 200:
ok
I0127 14:19:17.373577 1863329 api_server.go:141] control plane version: v1.32.1
I0127 14:19:17.373603 1863329 api_server.go:131] duration metric: took 7.959402ms to wait for apiserver health ...
I0127 14:19:17.373612 1863329 system_pods.go:43] waiting for kube-system pods to appear ...
I0127 14:19:17.382361 1863329 system_pods.go:59] 9 kube-system pods found
I0127 14:19:17.382397 1863329 system_pods.go:61] "coredns-668d6bf9bc-f66f4" [b30ba9c6-eb6e-44c1-b389-96263bb405a1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0127 14:19:17.382408 1863329 system_pods.go:61] "coredns-668d6bf9bc-pt6d2" [d8cf3c75-3646-40d8-8131-efd331e2cec7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0127 14:19:17.382420 1863329 system_pods.go:61] "etcd-newest-cni-309688" [f963f636-8186-4dd8-8263-b7bc29d15bc0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0127 14:19:17.382430 1863329 system_pods.go:61] "kube-apiserver-newest-cni-309688" [90584ec0-2731-48e1-a2e6-5bef7b170386] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0127 14:19:17.382453 1863329 system_pods.go:61] "kube-controller-manager-newest-cni-309688" [ef3b3e37-08d8-48aa-a55e-55d4c87c8189] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0127 14:19:17.382460 1863329 system_pods.go:61] "kube-proxy-8mwp9" [ebb658f3-eba2-4743-94cd-da996046bd02] Running
I0127 14:19:17.382473 1863329 system_pods.go:61] "kube-scheduler-newest-cni-309688" [07bcbbe9-474a-4bc2-9f58-f889fa685754] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0127 14:19:17.382480 1863329 system_pods.go:61] "metrics-server-f79f97bbb-jw4m9" [85929ac8-142c-4bc7-90da-5c13f9ff3c0c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0127 14:19:17.382486 1863329 system_pods.go:61] "storage-provisioner" [94820502-acf5-4297-8fc9-d4b4953b01ab] Running
I0127 14:19:17.382496 1863329 system_pods.go:74] duration metric: took 8.875555ms to wait for pod list to return data ...
I0127 14:19:17.382507 1863329 default_sa.go:34] waiting for default service account to be created ...
I0127 14:19:17.385289 1863329 default_sa.go:45] found service account: "default"
I0127 14:19:17.385310 1863329 default_sa.go:55] duration metric: took 2.794486ms for default service account to be created ...
I0127 14:19:17.385319 1863329 kubeadm.go:582] duration metric: took 238.459291ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
I0127 14:19:17.385341 1863329 node_conditions.go:102] verifying NodePressure condition ...
I0127 14:19:17.388555 1863329 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0127 14:19:17.388583 1863329 node_conditions.go:123] node cpu capacity is 2
I0127 14:19:17.388596 1863329 node_conditions.go:105] duration metric: took 3.249906ms to run NodePressure ...
I0127 14:19:17.388610 1863329 start.go:241] waiting for startup goroutines ...
I0127 14:19:17.418149 1863329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0127 14:19:17.421312 1863329 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0127 14:19:17.421340 1863329 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0127 14:19:17.438395 1863329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0127 14:19:17.454881 1863329 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0127 14:19:17.454907 1863329 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0127 14:19:17.463957 1863329 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0127 14:19:17.463983 1863329 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0127 14:19:17.511881 1863329 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0127 14:19:17.511918 1863329 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0127 14:19:17.526875 1863329 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0127 14:19:17.526902 1863329 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0127 14:19:17.564740 1863329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
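Each addon's manifests are applied in a single kubectl invocation with repeated -f flags, using the pinned binary and the in-VM kubeconfig exactly as the Run: line above shows. A sketch of composing that command (paths copied from the log):

package main

import (
	"os"
	"os/exec"
)

func main() {
	files := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	// sudo accepts leading VAR=value assignments before the command.
	args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.32.1/kubectl", "apply"}
	for _, f := range files {
		args = append(args, "-f", f)
	}
	cmd := exec.Command("sudo", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}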
I0127 14:19:17.593901 1863329 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0127 14:19:17.593956 1863329 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0127 14:19:17.686229 1863329 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0127 14:19:17.686255 1863329 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0127 14:19:17.771605 1863329 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0127 14:19:17.771642 1863329 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0127 14:19:17.858960 1863329 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0127 14:19:17.858995 1863329 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0127 14:19:17.968615 1863329 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0127 14:19:17.968653 1863329 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0127 14:19:18.103281 1863329 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0127 14:19:18.103311 1863329 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0127 14:19:18.180707 1863329 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0127 14:19:18.180741 1863329 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0127 14:19:18.229422 1863329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0127 14:19:19.526682 1863329 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.088226902s)
I0127 14:19:19.526763 1863329 main.go:141] libmachine: Making call to close driver server
I0127 14:19:19.526777 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .Close
I0127 14:19:19.526802 1863329 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.962012351s)
I0127 14:19:19.526851 1863329 main.go:141] libmachine: Making call to close driver server
I0127 14:19:19.526861 1863329 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.108674811s)
I0127 14:19:19.526875 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .Close
I0127 14:19:19.526891 1863329 main.go:141] libmachine: Making call to close driver server
I0127 14:19:19.526910 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .Close
I0127 14:19:19.527161 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | Closing plugin on server side
I0127 14:19:19.527203 1863329 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:19:19.527212 1863329 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:19:19.527219 1863329 main.go:141] libmachine: Making call to close driver server
I0127 14:19:19.527227 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .Close
I0127 14:19:19.528059 1863329 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:19:19.528072 1863329 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:19:19.528080 1863329 main.go:141] libmachine: Making call to close driver server
I0127 14:19:19.528088 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .Close
I0127 14:19:19.528229 1863329 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:19:19.528239 1863329 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:19:19.528293 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | Closing plugin on server side
I0127 14:19:19.528342 1863329 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:19:19.528349 1863329 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:19:19.528356 1863329 main.go:141] libmachine: Making call to close driver server
I0127 14:19:19.528362 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .Close
I0127 14:19:19.528502 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | Closing plugin on server side
I0127 14:19:19.528531 1863329 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:19:19.528538 1863329 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:19:19.528548 1863329 addons.go:479] Verifying addon metrics-server=true in "newest-cni-309688"
I0127 14:19:19.528986 1863329 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:19:19.529006 1863329 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:19:19.529009 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | Closing plugin on server side
I0127 14:19:19.552242 1863329 main.go:141] libmachine: Making call to close driver server
I0127 14:19:19.552274 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .Close
I0127 14:19:19.552631 1863329 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:19:19.552650 1863329 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:19:19.709148 1863329 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.47964575s)
I0127 14:19:19.709210 1863329 main.go:141] libmachine: Making call to close driver server
I0127 14:19:19.709226 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .Close
I0127 14:19:19.709584 1863329 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:19:19.709606 1863329 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:19:19.709613 1863329 main.go:141] libmachine: Making call to close driver server
I0127 14:19:19.709610 1863329 main.go:141] libmachine: (newest-cni-309688) DBG | Closing plugin on server side
I0127 14:19:19.709620 1863329 main.go:141] libmachine: (newest-cni-309688) Calling .Close
I0127 14:19:19.709911 1863329 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:19:19.709925 1863329 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:19:19.711462 1863329 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p newest-cni-309688 addons enable metrics-server
I0127 14:19:19.712846 1863329 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
I0127 14:19:19.714093 1863329 addons.go:514] duration metric: took 2.567193619s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
I0127 14:19:19.714146 1863329 start.go:246] waiting for cluster config update ...
I0127 14:19:19.714163 1863329 start.go:255] writing updated cluster config ...
I0127 14:19:19.714515 1863329 ssh_runner.go:195] Run: rm -f paused
I0127 14:19:19.771292 1863329 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
I0127 14:19:19.773125 1863329 out.go:177] * Done! kubectl is now configured to use "newest-cni-309688" cluster and "default" namespace by default
I0127 14:19:16.407410 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
I0127 14:19:18.408328 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
I0127 14:19:20.905706 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
I0127 14:19:22.906390 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
I0127 14:19:25.405847 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
I0127 14:19:27.406081 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
I0127 14:19:29.406653 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
I0127 14:19:31.905101 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
I0127 14:19:33.906032 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
I0127 14:19:36.406416 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
I0127 14:19:38.905541 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
I0127 14:19:41.405451 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
I0127 14:19:43.405883 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
I0127 14:19:45.905497 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
I0127 14:19:47.905917 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
I0127 14:19:50.405296 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
I0127 14:19:52.405989 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
I0127 14:19:54.905953 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
I0127 14:19:56.906021 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
I0127 14:19:58.906598 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
I0127 14:20:01.405909 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
I0127 14:20:03.406128 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
I0127 14:20:05.906092 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
I0127 14:20:08.405216 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
I0127 14:20:10.405449 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
I0127 14:20:12.905583 1860751 pod_ready.go:103] pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace has status "Ready":"False"
I0127 14:20:14.399935 1860751 pod_ready.go:82] duration metric: took 4m0.000530283s for pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace to be "Ready" ...
E0127 14:20:14.399966 1860751 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-m4ddb" in "kube-system" namespace to be "Ready" (will not retry!)
I0127 14:20:14.399992 1860751 pod_ready.go:39] duration metric: took 4m31.410913398s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0127 14:20:14.400032 1860751 kubeadm.go:597] duration metric: took 5m29.594675564s to restartPrimaryControlPlane
W0127 14:20:14.400141 1860751 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
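The polling lines above are minikube's pod_ready loop: the metrics-server pod is re-checked on a roughly 2s cadence until the 4m0s budget lapses, after which the wait is abandoned ("will not retry!") and the control plane is reset below. A minimal client-go sketch of that wait-with-deadline pattern follows; the kubeconfig path and pod name are taken from the log, but the structure is illustrative, not minikube's actual pod_ready.go.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's Ready condition is True.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // the 4m0s budget from the log
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-f79f97bbb-m4ddb", metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second) // ~2s cadence, as in the timestamps above
	}
	fmt.Println("timed out waiting for pod to be Ready (will not retry)")
}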
I0127 14:20:14.400175 1860751 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I0127 14:20:15.909704 1860751 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.509493932s)
I0127 14:20:15.909782 1860751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0127 14:20:15.925857 1860751 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0127 14:20:15.935803 1860751 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0127 14:20:15.946508 1860751 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0127 14:20:15.946527 1860751 kubeadm.go:157] found existing configuration files:
I0127 14:20:15.946566 1860751 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
I0127 14:20:15.956633 1860751 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0127 14:20:15.956690 1860751 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0127 14:20:15.966965 1860751 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
I0127 14:20:15.984740 1860751 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0127 14:20:15.984801 1860751 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0127 14:20:15.995541 1860751 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
I0127 14:20:16.005543 1860751 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0127 14:20:16.005605 1860751 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0127 14:20:16.015855 1860751 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
I0127 14:20:16.025594 1860751 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0127 14:20:16.025640 1860751 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
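The grep-then-rm sequence above checks each control-plane kubeconfig for the expected apiserver endpoint and removes the file when the endpoint is absent; here all four files were already gone after the kubeadm reset, so every grep exits 2 and each rm -f is a no-op. A hedged Go sketch of the same check, illustrative rather than minikube's actual kubeadm.go:

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	endpoint := []byte("https://control-plane.minikube.internal:8444")
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, conf := range confs {
		data, err := os.ReadFile(conf)
		if err == nil && bytes.Contains(data, endpoint) {
			continue // endpoint present, keep the file
		}
		// Mirrors `sudo grep ... || sudo rm -f ...` in the log; rm -f
		// succeeds even when the file never existed.
		os.Remove(conf)
		fmt.Printf("%s: stale or missing, removed\n", conf)
	}
}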
I0127 14:20:16.035989 1860751 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0127 14:20:16.197395 1860751 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0127 14:20:24.074171 1860751 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
I0127 14:20:24.074259 1860751 kubeadm.go:310] [preflight] Running pre-flight checks
I0127 14:20:24.074369 1860751 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0127 14:20:24.074528 1860751 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0127 14:20:24.074657 1860751 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0127 14:20:24.074731 1860751 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0127 14:20:24.076292 1860751 out.go:235] - Generating certificates and keys ...
I0127 14:20:24.076373 1860751 kubeadm.go:310] [certs] Using existing ca certificate authority
I0127 14:20:24.076450 1860751 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0127 14:20:24.076532 1860751 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0127 14:20:24.076585 1860751 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
I0127 14:20:24.076644 1860751 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
I0127 14:20:24.076713 1860751 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
I0127 14:20:24.076800 1860751 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
I0127 14:20:24.076884 1860751 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
I0127 14:20:24.076992 1860751 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0127 14:20:24.077103 1860751 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0127 14:20:24.077169 1860751 kubeadm.go:310] [certs] Using the existing "sa" key
I0127 14:20:24.077243 1860751 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0127 14:20:24.077289 1860751 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0127 14:20:24.077349 1860751 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0127 14:20:24.077397 1860751 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0127 14:20:24.077468 1860751 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0127 14:20:24.077537 1860751 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0127 14:20:24.077610 1860751 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0127 14:20:24.077669 1860751 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0127 14:20:24.078852 1860751 out.go:235] - Booting up control plane ...
I0127 14:20:24.078965 1860751 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0127 14:20:24.079055 1860751 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0127 14:20:24.079140 1860751 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0127 14:20:24.079285 1860751 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0127 14:20:24.079429 1860751 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0127 14:20:24.079489 1860751 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0127 14:20:24.079690 1860751 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0127 14:20:24.079833 1860751 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0127 14:20:24.079921 1860751 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.61135ms
I0127 14:20:24.080007 1860751 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0127 14:20:24.080110 1860751 kubeadm.go:310] [api-check] The API server is healthy after 5.001239504s
I0127 14:20:24.080256 1860751 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0127 14:20:24.080387 1860751 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0127 14:20:24.080441 1860751 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0127 14:20:24.080637 1860751 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-212529 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0127 14:20:24.080711 1860751 kubeadm.go:310] [bootstrap-token] Using token: pxjq5d.hk6ws8nooc0hkr03
I0127 14:20:24.082018 1860751 out.go:235] - Configuring RBAC rules ...
I0127 14:20:24.082176 1860751 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0127 14:20:24.082314 1860751 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0127 14:20:24.082518 1860751 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0127 14:20:24.082703 1860751 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0127 14:20:24.082889 1860751 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0127 14:20:24.083015 1860751 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0127 14:20:24.083173 1860751 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0127 14:20:24.083250 1860751 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0127 14:20:24.083301 1860751 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0127 14:20:24.083311 1860751 kubeadm.go:310]
I0127 14:20:24.083396 1860751 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0127 14:20:24.083407 1860751 kubeadm.go:310]
I0127 14:20:24.083513 1860751 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0127 14:20:24.083522 1860751 kubeadm.go:310]
I0127 14:20:24.083558 1860751 kubeadm.go:310] mkdir -p $HOME/.kube
I0127 14:20:24.083655 1860751 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0127 14:20:24.083734 1860751 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0127 14:20:24.083743 1860751 kubeadm.go:310]
I0127 14:20:24.083802 1860751 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0127 14:20:24.083810 1860751 kubeadm.go:310]
I0127 14:20:24.083852 1860751 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0127 14:20:24.083858 1860751 kubeadm.go:310]
I0127 14:20:24.083921 1860751 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0127 14:20:24.084043 1860751 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0127 14:20:24.084140 1860751 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0127 14:20:24.084149 1860751 kubeadm.go:310]
I0127 14:20:24.084263 1860751 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0127 14:20:24.084383 1860751 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0127 14:20:24.084400 1860751 kubeadm.go:310]
I0127 14:20:24.084497 1860751 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token pxjq5d.hk6ws8nooc0hkr03 \
I0127 14:20:24.084584 1860751 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:da793a243b54c5383b132bcbdadb0739d427211c6d5d2593cf9375377ad7834e \
I0127 14:20:24.084604 1860751 kubeadm.go:310] --control-plane
I0127 14:20:24.084610 1860751 kubeadm.go:310]
I0127 14:20:24.084679 1860751 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0127 14:20:24.084685 1860751 kubeadm.go:310]
I0127 14:20:24.084750 1860751 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token pxjq5d.hk6ws8nooc0hkr03 \
I0127 14:20:24.084894 1860751 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:da793a243b54c5383b132bcbdadb0739d427211c6d5d2593cf9375377ad7834e
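The --discovery-token-ca-cert-hash value in the join commands above is the SHA-256 digest of the cluster CA certificate's Subject Public Key Info. A short sketch that recomputes it, assuming the CA sits in the certificateDir (/var/lib/minikube/certs) that kubeadm reported earlier:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo, not the whole cert.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum) // compare with the hash in the join command
}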
I0127 14:20:24.084923 1860751 cni.go:84] Creating CNI manager for ""
I0127 14:20:24.084937 1860751 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0127 14:20:24.086257 1860751 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0127 14:20:24.087300 1860751 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0127 14:20:24.097744 1860751 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
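The scp line above records only that a 496-byte conflist landed at /etc/cni/net.d/1-k8s.conflist; the payload itself never appears in the log. For orientation, here is a representative bridge CNI conflist of the kind the "Configuring bridge CNI" step refers to, written from Go. The subnet and plugin fields are assumptions, not the actual bytes minikube copied.

package main

import "os"

// conflist is an illustrative bridge + portmap chain; real values may differ.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// Writing under /etc/cni/net.d requires root, as the sudo mkdir above implies.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}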
I0127 14:20:24.115867 1860751 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0127 14:20:24.115958 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 14:20:24.115962 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-212529 minikube.k8s.io/updated_at=2025_01_27T14_20_24_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=a23717f006184090cd3c7894641a342ba4ae8c4d minikube.k8s.io/name=default-k8s-diff-port-212529 minikube.k8s.io/primary=true
I0127 14:20:24.324045 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 14:20:24.324042 1860751 ops.go:34] apiserver oom_adj: -16
I0127 14:20:24.824528 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 14:20:25.324196 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 14:20:25.824971 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 14:20:26.324285 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 14:20:26.825007 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 14:20:27.324812 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 14:20:27.824252 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 14:20:28.324496 1860751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 14:20:28.413845 1860751 kubeadm.go:1113] duration metric: took 4.297974897s to wait for elevateKubeSystemPrivileges
I0127 14:20:28.413890 1860751 kubeadm.go:394] duration metric: took 5m43.681075591s to StartCluster
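The burst of `kubectl get sa default` retries above is the elevateKubeSystemPrivileges wait: the command is re-run on a roughly 500ms cadence until the default service account exists, so the minikube-rbac cluster-admin binding for kube-system:default (created just before) can take effect. A hedged sketch of that loop, not minikube's actual code; the one-minute budget is an assumption, since the log only shows it completing after ~4.3s:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.32.1/kubectl"
	deadline := time.Now().Add(time.Minute) // assumed budget
	for time.Now().Before(deadline) {
		// Mirrors the `sudo .../kubectl get sa default --kubeconfig=...` line above.
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account exists")
			return
		}
		time.Sleep(500 * time.Millisecond) // ~500ms spacing, as in the timestamps above
	}
	panic("timed out waiting for default service account")
}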
I0127 14:20:28.413911 1860751 settings.go:142] acquiring lock: {Name:mk26fe6d7b14cf85ba842a23d71a5c576b147570 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 14:20:28.414029 1860751 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20327-1798877/kubeconfig
I0127 14:20:28.416135 1860751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-1798877/kubeconfig: {Name:mk83da0b53bf0d0962bc51b16c589da37a41b6e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 14:20:28.416434 1860751 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.145 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0127 14:20:28.416580 1860751 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0127 14:20:28.416710 1860751 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-212529"
I0127 14:20:28.416715 1860751 config.go:182] Loaded profile config "default-k8s-diff-port-212529": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 14:20:28.416736 1860751 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-212529"
W0127 14:20:28.416745 1860751 addons.go:247] addon storage-provisioner should already be in state true
I0127 14:20:28.416742 1860751 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-212529"
I0127 14:20:28.416759 1860751 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-212529"
I0127 14:20:28.416785 1860751 host.go:66] Checking if "default-k8s-diff-port-212529" exists ...
I0127 14:20:28.416797 1860751 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-212529"
W0127 14:20:28.416807 1860751 addons.go:247] addon dashboard should already be in state true
I0127 14:20:28.416843 1860751 host.go:66] Checking if "default-k8s-diff-port-212529" exists ...
I0127 14:20:28.417198 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
I0127 14:20:28.417233 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:20:28.417240 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
I0127 14:20:28.417275 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:20:28.416772 1860751 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-212529"
I0127 14:20:28.416777 1860751 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-212529"
I0127 14:20:28.417322 1860751 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-212529"
W0127 14:20:28.417337 1860751 addons.go:247] addon metrics-server should already be in state true
I0127 14:20:28.417560 1860751 host.go:66] Checking if "default-k8s-diff-port-212529" exists ...
I0127 14:20:28.417900 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
I0127 14:20:28.417916 1860751 out.go:177] * Verifying Kubernetes components...
I0127 14:20:28.417955 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:20:28.417963 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
I0127 14:20:28.418005 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:20:28.419061 1860751 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 14:20:28.434949 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43063
I0127 14:20:28.435505 1860751 main.go:141] libmachine: () Calling .GetVersion
I0127 14:20:28.436082 1860751 main.go:141] libmachine: Using API Version 1
I0127 14:20:28.436114 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:20:28.436521 1860751 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:20:28.436752 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetState
I0127 14:20:28.437523 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39087
I0127 14:20:28.437697 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39887
I0127 14:20:28.438072 1860751 main.go:141] libmachine: () Calling .GetVersion
I0127 14:20:28.438417 1860751 main.go:141] libmachine: () Calling .GetVersion
I0127 14:20:28.438657 1860751 main.go:141] libmachine: Using API Version 1
I0127 14:20:28.438682 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:20:28.438906 1860751 main.go:141] libmachine: Using API Version 1
I0127 14:20:28.438929 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:20:28.439056 1860751 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:20:28.439281 1860751 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:20:28.439489 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39481
I0127 14:20:28.439624 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
I0127 14:20:28.439660 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:20:28.439804 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
I0127 14:20:28.439846 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:20:28.439944 1860751 main.go:141] libmachine: () Calling .GetVersion
I0127 14:20:28.440409 1860751 main.go:141] libmachine: Using API Version 1
I0127 14:20:28.440432 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:20:28.440811 1860751 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:20:28.441377 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
I0127 14:20:28.441420 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:20:28.441785 1860751 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-212529"
W0127 14:20:28.441804 1860751 addons.go:247] addon default-storageclass should already be in state true
I0127 14:20:28.441836 1860751 host.go:66] Checking if "default-k8s-diff-port-212529" exists ...
I0127 14:20:28.442074 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
I0127 14:20:28.442111 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:20:28.460558 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33535
I0127 14:20:28.461043 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38533
I0127 14:20:28.461200 1860751 main.go:141] libmachine: () Calling .GetVersion
I0127 14:20:28.461461 1860751 main.go:141] libmachine: () Calling .GetVersion
I0127 14:20:28.461725 1860751 main.go:141] libmachine: Using API Version 1
I0127 14:20:28.461749 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:20:28.461814 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41523
I0127 14:20:28.462061 1860751 main.go:141] libmachine: Using API Version 1
I0127 14:20:28.462083 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:20:28.462286 1860751 main.go:141] libmachine: () Calling .GetVersion
I0127 14:20:28.462330 1860751 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:20:28.462485 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetState
I0127 14:20:28.462605 1860751 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:20:28.462762 1860751 main.go:141] libmachine: Using API Version 1
I0127 14:20:28.462775 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:20:28.462832 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetState
I0127 14:20:28.463228 1860751 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:20:28.463817 1860751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20327-1798877/.minikube/bin/docker-machine-driver-kvm2
I0127 14:20:28.463862 1860751 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:20:28.464659 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .DriverName
I0127 14:20:28.465253 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .DriverName
I0127 14:20:28.466108 1860751 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0127 14:20:28.466667 1860751 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0127 14:20:28.467300 1860751 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0127 14:20:28.467316 1860751 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0127 14:20:28.467333 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHHostname
I0127 14:20:28.469055 1860751 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0127 14:20:28.469287 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38797
I0127 14:20:28.469629 1860751 main.go:141] libmachine: () Calling .GetVersion
I0127 14:20:28.470009 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0127 14:20:28.470027 1860751 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0127 14:20:28.470055 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHHostname
I0127 14:20:28.470158 1860751 main.go:141] libmachine: Using API Version 1
I0127 14:20:28.470180 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:20:28.470774 1860751 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:20:28.470967 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetState
I0127 14:20:28.471164 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
I0127 14:20:28.471781 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:8f:73", ip: ""} in network mk-default-k8s-diff-port-212529: {Iface:virbr2 ExpiryTime:2025-01-27 15:14:32 +0000 UTC Type:0 Mac:52:54:00:b1:8f:73 Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:default-k8s-diff-port-212529 Clientid:01:52:54:00:b1:8f:73}
I0127 14:20:28.471814 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined IP address 192.168.50.145 and MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
I0127 14:20:28.472153 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHPort
I0127 14:20:28.472327 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHKeyPath
I0127 14:20:28.472488 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHUsername
I0127 14:20:28.472639 1860751 sshutil.go:53] new ssh client: &{IP:192.168.50.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/default-k8s-diff-port-212529/id_rsa Username:docker}
I0127 14:20:28.473502 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .DriverName
I0127 14:20:28.473853 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
I0127 14:20:28.474311 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:8f:73", ip: ""} in network mk-default-k8s-diff-port-212529: {Iface:virbr2 ExpiryTime:2025-01-27 15:14:32 +0000 UTC Type:0 Mac:52:54:00:b1:8f:73 Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:default-k8s-diff-port-212529 Clientid:01:52:54:00:b1:8f:73}
I0127 14:20:28.474338 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined IP address 192.168.50.145 and MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
I0127 14:20:28.474488 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHPort
I0127 14:20:28.474652 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHKeyPath
I0127 14:20:28.474805 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHUsername
I0127 14:20:28.474896 1860751 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0127 14:20:28.474964 1860751 sshutil.go:53] new ssh client: &{IP:192.168.50.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/default-k8s-diff-port-212529/id_rsa Username:docker}
I0127 14:20:28.475898 1860751 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0127 14:20:28.475916 1860751 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0127 14:20:28.475933 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHHostname
I0127 14:20:28.478521 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
I0127 14:20:28.478927 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:8f:73", ip: ""} in network mk-default-k8s-diff-port-212529: {Iface:virbr2 ExpiryTime:2025-01-27 15:14:32 +0000 UTC Type:0 Mac:52:54:00:b1:8f:73 Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:default-k8s-diff-port-212529 Clientid:01:52:54:00:b1:8f:73}
I0127 14:20:28.478950 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined IP address 192.168.50.145 and MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
I0127 14:20:28.479131 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHPort
I0127 14:20:28.479325 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHKeyPath
I0127 14:20:28.479479 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHUsername
I0127 14:20:28.479622 1860751 sshutil.go:53] new ssh client: &{IP:192.168.50.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/default-k8s-diff-port-212529/id_rsa Username:docker}
I0127 14:20:28.482246 1860751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39477
I0127 14:20:28.482637 1860751 main.go:141] libmachine: () Calling .GetVersion
I0127 14:20:28.483047 1860751 main.go:141] libmachine: Using API Version 1
I0127 14:20:28.483068 1860751 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:20:28.483409 1860751 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:20:28.483542 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetState
I0127 14:20:28.484999 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .DriverName
I0127 14:20:28.485241 1860751 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0127 14:20:28.485259 1860751 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0127 14:20:28.485276 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHHostname
I0127 14:20:28.488061 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
I0127 14:20:28.488402 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:8f:73", ip: ""} in network mk-default-k8s-diff-port-212529: {Iface:virbr2 ExpiryTime:2025-01-27 15:14:32 +0000 UTC Type:0 Mac:52:54:00:b1:8f:73 Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:default-k8s-diff-port-212529 Clientid:01:52:54:00:b1:8f:73}
I0127 14:20:28.488429 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | domain default-k8s-diff-port-212529 has defined IP address 192.168.50.145 and MAC address 52:54:00:b1:8f:73 in network mk-default-k8s-diff-port-212529
I0127 14:20:28.488581 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHPort
I0127 14:20:28.488725 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHKeyPath
I0127 14:20:28.488858 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .GetSSHUsername
I0127 14:20:28.489030 1860751 sshutil.go:53] new ssh client: &{IP:192.168.50.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/default-k8s-diff-port-212529/id_rsa Username:docker}
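The sshutil lines above assemble SSH clients from the DHCP-resolved IP, port 22, the "docker" user, and the per-machine id_rsa key. A minimal golang.org/x/crypto/ssh sketch of such a client; host-key verification is skipped purely for brevity, and minikube's own sshutil may differ:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/20327-1798877/.minikube/machines/default-k8s-diff-port-212529/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "192.168.50.145:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch shortcut; not for production
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("connected:", string(client.ServerVersion()))
}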
I0127 14:20:28.646865 1860751 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0127 14:20:28.672532 1860751 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-212529" to be "Ready" ...
I0127 14:20:28.703176 1860751 node_ready.go:49] node "default-k8s-diff-port-212529" has status "Ready":"True"
I0127 14:20:28.703197 1860751 node_ready.go:38] duration metric: took 30.636379ms for node "default-k8s-diff-port-212529" to be "Ready" ...
I0127 14:20:28.703206 1860751 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0127 14:20:28.710494 1860751 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-g77l4" in "kube-system" namespace to be "Ready" ...
I0127 14:20:28.817820 1860751 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0127 14:20:28.817849 1860751 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0127 14:20:28.837871 1860751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0127 14:20:28.851072 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0127 14:20:28.851107 1860751 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0127 14:20:28.852529 1860751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0127 14:20:28.858946 1860751 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0127 14:20:28.858978 1860751 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0127 14:20:28.897376 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0127 14:20:28.897409 1860751 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0127 14:20:28.944458 1860751 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0127 14:20:28.944489 1860751 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0127 14:20:28.996770 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0127 14:20:28.996799 1860751 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0127 14:20:29.041836 1860751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0127 14:20:29.066199 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0127 14:20:29.066234 1860751 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0127 14:20:29.191066 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0127 14:20:29.191092 1860751 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0127 14:20:29.292937 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0127 14:20:29.292970 1860751 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0127 14:20:29.324574 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0127 14:20:29.324605 1860751 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0127 14:20:29.381589 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0127 14:20:29.381618 1860751 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0127 14:20:29.579396 1860751 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0127 14:20:29.579421 1860751 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0127 14:20:29.730806 1860751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
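Each addon above is staged by scp-ing its manifests under /etc/kubernetes/addons/ and then applied in a single kubectl invocation with a repeated -f flag, as in the dashboard Run line. A hedged sketch that assembles such an invocation; the file list is copied from the log:

package main

import "os/exec"

func main() {
	files := []string{
		"dashboard-ns.yaml", "dashboard-clusterrole.yaml",
		"dashboard-clusterrolebinding.yaml", "dashboard-configmap.yaml",
		"dashboard-dp.yaml", "dashboard-role.yaml",
		"dashboard-rolebinding.yaml", "dashboard-sa.yaml",
		"dashboard-secret.yaml", "dashboard-svc.yaml",
	}
	// sudo treats leading VAR=value arguments as environment settings, which
	// is how the log's `sudo KUBECONFIG=... kubectl apply ...` works.
	args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.32.1/kubectl", "apply"}
	for _, f := range files {
		args = append(args, "-f", "/etc/kubernetes/addons/"+f)
	}
	if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
		panic(string(out))
	}
}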
I0127 14:20:30.332634 1860751 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.480056609s)
I0127 14:20:30.332719 1860751 main.go:141] libmachine: Making call to close driver server
I0127 14:20:30.332740 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .Close
I0127 14:20:30.332753 1860751 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.494842628s)
I0127 14:20:30.332799 1860751 main.go:141] libmachine: Making call to close driver server
I0127 14:20:30.332812 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .Close
I0127 14:20:30.333060 1860751 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:20:30.333080 1860751 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:20:30.333120 1860751 main.go:141] libmachine: Making call to close driver server
I0127 14:20:30.333128 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .Close
I0127 14:20:30.333246 1860751 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:20:30.333271 1860751 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:20:30.333280 1860751 main.go:141] libmachine: Making call to close driver server
I0127 14:20:30.333287 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .Close
I0127 14:20:30.333331 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | Closing plugin on server side
I0127 14:20:30.333499 1860751 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:20:30.333513 1860751 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:20:30.335273 1860751 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:20:30.335291 1860751 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:20:30.402574 1860751 main.go:141] libmachine: Making call to close driver server
I0127 14:20:30.402607 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .Close
I0127 14:20:30.402929 1860751 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:20:30.402951 1860751 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:20:30.597814 1860751 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.555933063s)
I0127 14:20:30.597873 1860751 main.go:141] libmachine: Making call to close driver server
I0127 14:20:30.597890 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .Close
I0127 14:20:30.598223 1860751 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:20:30.598244 1860751 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:20:30.598254 1860751 main.go:141] libmachine: Making call to close driver server
I0127 14:20:30.598262 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .Close
I0127 14:20:30.598523 1860751 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:20:30.598545 1860751 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:20:30.598558 1860751 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-212529"
I0127 14:20:30.720235 1860751 pod_ready.go:103] pod "coredns-668d6bf9bc-g77l4" in "kube-system" namespace has status "Ready":"False"
I0127 14:20:31.251992 1860751 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.52112686s)
I0127 14:20:31.252076 1860751 main.go:141] libmachine: Making call to close driver server
I0127 14:20:31.252099 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .Close
I0127 14:20:31.252456 1860751 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:20:31.252477 1860751 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:20:31.252487 1860751 main.go:141] libmachine: Making call to close driver server
I0127 14:20:31.252495 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) Calling .Close
I0127 14:20:31.252788 1860751 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:20:31.252797 1860751 main.go:141] libmachine: (default-k8s-diff-port-212529) DBG | Closing plugin on server side
I0127 14:20:31.252810 1860751 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:20:31.254461 1860751 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p default-k8s-diff-port-212529 addons enable metrics-server
I0127 14:20:31.255681 1860751 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
I0127 14:20:31.256922 1860751 addons.go:514] duration metric: took 2.840355251s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
I0127 14:20:33.216592 1860751 pod_ready.go:103] pod "coredns-668d6bf9bc-g77l4" in "kube-system" namespace has status "Ready":"False"
I0127 14:20:35.217244 1860751 pod_ready.go:103] pod "coredns-668d6bf9bc-g77l4" in "kube-system" namespace has status "Ready":"False"
I0127 14:20:37.731702 1860751 pod_ready.go:93] pod "coredns-668d6bf9bc-g77l4" in "kube-system" namespace has status "Ready":"True"
I0127 14:20:37.731733 1860751 pod_ready.go:82] duration metric: took 9.021206919s for pod "coredns-668d6bf9bc-g77l4" in "kube-system" namespace to be "Ready" ...
I0127 14:20:37.731747 1860751 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-gwfcp" in "kube-system" namespace to be "Ready" ...
I0127 14:20:37.761047 1860751 pod_ready.go:93] pod "coredns-668d6bf9bc-gwfcp" in "kube-system" namespace has status "Ready":"True"
I0127 14:20:37.761074 1860751 pod_ready.go:82] duration metric: took 29.318136ms for pod "coredns-668d6bf9bc-gwfcp" in "kube-system" namespace to be "Ready" ...
I0127 14:20:37.761084 1860751 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
I0127 14:20:37.772463 1860751 pod_ready.go:93] pod "etcd-default-k8s-diff-port-212529" in "kube-system" namespace has status "Ready":"True"
I0127 14:20:37.772491 1860751 pod_ready.go:82] duration metric: took 11.399303ms for pod "etcd-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
I0127 14:20:37.772504 1860751 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
I0127 14:20:37.780269 1860751 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-212529" in "kube-system" namespace has status "Ready":"True"
I0127 14:20:37.780294 1860751 pod_ready.go:82] duration metric: took 7.782307ms for pod "kube-apiserver-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
I0127 14:20:37.780306 1860751 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
I0127 14:20:37.785276 1860751 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-212529" in "kube-system" namespace has status "Ready":"True"
I0127 14:20:37.785304 1860751 pod_ready.go:82] duration metric: took 4.986421ms for pod "kube-controller-manager-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
I0127 14:20:37.785315 1860751 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f5fmd" in "kube-system" namespace to be "Ready" ...
I0127 14:20:38.114939 1860751 pod_ready.go:93] pod "kube-proxy-f5fmd" in "kube-system" namespace has status "Ready":"True"
I0127 14:20:38.114969 1860751 pod_ready.go:82] duration metric: took 329.644964ms for pod "kube-proxy-f5fmd" in "kube-system" namespace to be "Ready" ...
I0127 14:20:38.114981 1860751 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
I0127 14:20:38.515806 1860751 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-212529" in "kube-system" namespace has status "Ready":"True"
I0127 14:20:38.515832 1860751 pod_ready.go:82] duration metric: took 400.844808ms for pod "kube-scheduler-default-k8s-diff-port-212529" in "kube-system" namespace to be "Ready" ...
I0127 14:20:38.515841 1860751 pod_ready.go:39] duration metric: took 9.812625577s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
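The pod_ready.go entries above poll each system-critical pod until its Ready condition reports True, with a 6m0s budget per pod. A minimal client-go sketch of that wait shape, not minikube's actual implementation (pod and namespace names are copied from the log; the kubeconfig path is assumed to be the default):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's Ready condition is True.
func isReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(6 * time.Minute) // the 6m0s budget seen in the log
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-668d6bf9bc-g77l4", metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second) // poll interval is illustrative, not minikube's
	}
	fmt.Println("timed out waiting for Ready")
}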
I0127 14:20:38.515859 1860751 api_server.go:52] waiting for apiserver process to appear ...
I0127 14:20:38.515918 1860751 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 14:20:38.534333 1860751 api_server.go:72] duration metric: took 10.117851719s to wait for apiserver process to appear ...
I0127 14:20:38.534364 1860751 api_server.go:88] waiting for apiserver healthz status ...
I0127 14:20:38.534390 1860751 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8444/healthz ...
I0127 14:20:38.540410 1860751 api_server.go:279] https://192.168.50.145:8444/healthz returned 200:
ok
I0127 14:20:38.541651 1860751 api_server.go:141] control plane version: v1.32.1
I0127 14:20:38.541674 1860751 api_server.go:131] duration metric: took 7.30288ms to wait for apiserver health ...
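The api_server.go health wait above is a plain HTTPS GET against /healthz that expects the literal body "ok". A minimal sketch of the same probe; the endpoint is taken from the log, and skipping TLS verification is a simplification (the real client trusts the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.50.145:8444/healthz")
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok", as logged above
}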
I0127 14:20:38.541685 1860751 system_pods.go:43] waiting for kube-system pods to appear ...
I0127 14:20:38.725366 1860751 system_pods.go:59] 9 kube-system pods found
I0127 14:20:38.725397 1860751 system_pods.go:61] "coredns-668d6bf9bc-g77l4" [4457b047-3339-455e-ab06-15a1e4d7a95f] Running
I0127 14:20:38.725402 1860751 system_pods.go:61] "coredns-668d6bf9bc-gwfcp" [d557581e-b74a-482d-9c8c-12e1b51d11d5] Running
I0127 14:20:38.725406 1860751 system_pods.go:61] "etcd-default-k8s-diff-port-212529" [1e347129-845b-4c34-831c-e056cccc90f7] Running
I0127 14:20:38.725410 1860751 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-212529" [1472d317-bd0d-4957-a955-d69eb5339d2a] Running
I0127 14:20:38.725414 1860751 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-212529" [0e5e7440-7389-4bc8-9ee5-0e8041edef25] Running
I0127 14:20:38.725417 1860751 system_pods.go:61] "kube-proxy-f5fmd" [a08f6d90-467b-4972-8c03-d62d07e108e5] Running
I0127 14:20:38.725422 1860751 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-212529" [34188644-73d6-4567-856a-895cef0abac8] Running
I0127 14:20:38.725431 1860751 system_pods.go:61] "metrics-server-f79f97bbb-gpkgd" [ec65f4da-1a84-4dab-9969-3ed09e9fdce2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0127 14:20:38.725436 1860751 system_pods.go:61] "storage-provisioner" [72ed4f2a-f894-4246-8596-b02befc5fde4] Running
I0127 14:20:38.725448 1860751 system_pods.go:74] duration metric: took 183.756587ms to wait for pod list to return data ...
I0127 14:20:38.725461 1860751 default_sa.go:34] waiting for default service account to be created ...
I0127 14:20:38.916064 1860751 default_sa.go:45] found service account: "default"
I0127 14:20:38.916100 1860751 default_sa.go:55] duration metric: took 190.628425ms for default service account to be created ...
I0127 14:20:38.916114 1860751 system_pods.go:137] waiting for k8s-apps to be running ...
I0127 14:20:39.121453 1860751 system_pods.go:87] 9 kube-system pods found
==> container status <==
CONTAINER           IMAGE               CREATED             STATE     NAME                         ATTEMPT   POD ID              POD
4e12a41db7090       523cad1a4df73       24 seconds ago      Exited    dashboard-metrics-scraper    9         b7ea4c9b57361       dashboard-metrics-scraper-86c6bf9756-gn6tj
e89316ee54115       07655ddf2eebe       21 minutes ago      Running   kubernetes-dashboard         0         e5916a311dfe2       kubernetes-dashboard-7779f9b69b-9vnfn
8564a8569f15d       c69fa2e9cbf5f       22 minutes ago      Running   coredns                      0         ee19c1b73f8b7       coredns-668d6bf9bc-vn9c5
02d05ad52d05c       c69fa2e9cbf5f       22 minutes ago      Running   coredns                      0         a74ecf0c41ddc       coredns-668d6bf9bc-52k8k
2679dfaab79eb       6e38f40d628db       22 minutes ago      Running   storage-provisioner          0         75280b90129b5       storage-provisioner
4f8f8d72b2d07       e29f9c7391fd9       22 minutes ago      Running   kube-proxy                   0         0d4c8744a3479       kube-proxy-k2hsk
c27254f84098d       a9e7e6b294baf       22 minutes ago      Running   etcd                         2         ecf7616195575       etcd-embed-certs-635679
a8744e9c18072       2b0d6572d062c       22 minutes ago      Running   kube-scheduler               2         d39608420fc3a       kube-scheduler-embed-certs-635679
07166050ad18d       95c0bda56fc4d       22 minutes ago      Running   kube-apiserver               2         39b18375d7684       kube-apiserver-embed-certs-635679
31762ba9c2652       019ee182b58e2       22 minutes ago      Running   kube-controller-manager      2         2df21f429ac69       kube-controller-manager-embed-certs-635679
==> containerd <==
Jan 27 14:34:23 embed-certs-635679 containerd[557]: time="2025-01-27T14:34:23.433157398Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
Jan 27 14:34:23 embed-certs-635679 containerd[557]: time="2025-01-27T14:34:23.435132565Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
Jan 27 14:34:23 embed-certs-635679 containerd[557]: time="2025-01-27T14:34:23.435212103Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
Jan 27 14:35:00 embed-certs-635679 containerd[557]: time="2025-01-27T14:35:00.430000400Z" level=info msg="CreateContainer within sandbox \"b7ea4c9b573618c10d13f34c3bae414e76a4629c47c0758826bf0242f75b3024\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,}"
Jan 27 14:35:00 embed-certs-635679 containerd[557]: time="2025-01-27T14:35:00.452965831Z" level=info msg="CreateContainer within sandbox \"b7ea4c9b573618c10d13f34c3bae414e76a4629c47c0758826bf0242f75b3024\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,} returns container id \"0d35fca358e782ec00c00549c82131301c9d4c325c9dda59171043c6fc08e4c5\""
Jan 27 14:35:00 embed-certs-635679 containerd[557]: time="2025-01-27T14:35:00.455361289Z" level=info msg="StartContainer for \"0d35fca358e782ec00c00549c82131301c9d4c325c9dda59171043c6fc08e4c5\""
Jan 27 14:35:00 embed-certs-635679 containerd[557]: time="2025-01-27T14:35:00.540154441Z" level=info msg="StartContainer for \"0d35fca358e782ec00c00549c82131301c9d4c325c9dda59171043c6fc08e4c5\" returns successfully"
Jan 27 14:35:00 embed-certs-635679 containerd[557]: time="2025-01-27T14:35:00.577914291Z" level=info msg="shim disconnected" id=0d35fca358e782ec00c00549c82131301c9d4c325c9dda59171043c6fc08e4c5 namespace=k8s.io
Jan 27 14:35:00 embed-certs-635679 containerd[557]: time="2025-01-27T14:35:00.578044697Z" level=warning msg="cleaning up after shim disconnected" id=0d35fca358e782ec00c00549c82131301c9d4c325c9dda59171043c6fc08e4c5 namespace=k8s.io
Jan 27 14:35:00 embed-certs-635679 containerd[557]: time="2025-01-27T14:35:00.578083575Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 27 14:35:00 embed-certs-635679 containerd[557]: time="2025-01-27T14:35:00.931304002Z" level=info msg="RemoveContainer for \"bdbc010c8f3f227025ff006fd718ab05bf4d2719e86be8eb09db45e97b58a869\""
Jan 27 14:35:00 embed-certs-635679 containerd[557]: time="2025-01-27T14:35:00.936656407Z" level=info msg="RemoveContainer for \"bdbc010c8f3f227025ff006fd718ab05bf4d2719e86be8eb09db45e97b58a869\" returns successfully"
Jan 27 14:39:28 embed-certs-635679 containerd[557]: time="2025-01-27T14:39:28.428932890Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 27 14:39:28 embed-certs-635679 containerd[557]: time="2025-01-27T14:39:28.438738645Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
Jan 27 14:39:28 embed-certs-635679 containerd[557]: time="2025-01-27T14:39:28.440721799Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
Jan 27 14:39:28 embed-certs-635679 containerd[557]: time="2025-01-27T14:39:28.440843069Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
Jan 27 14:40:08 embed-certs-635679 containerd[557]: time="2025-01-27T14:40:08.431890104Z" level=info msg="CreateContainer within sandbox \"b7ea4c9b573618c10d13f34c3bae414e76a4629c47c0758826bf0242f75b3024\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,}"
Jan 27 14:40:08 embed-certs-635679 containerd[557]: time="2025-01-27T14:40:08.455806626Z" level=info msg="CreateContainer within sandbox \"b7ea4c9b573618c10d13f34c3bae414e76a4629c47c0758826bf0242f75b3024\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,} returns container id \"4e12a41db7090d7917d0f8c57490c3603a1c4ac09068c76f2b0658d26374fc2a\""
Jan 27 14:40:08 embed-certs-635679 containerd[557]: time="2025-01-27T14:40:08.456950499Z" level=info msg="StartContainer for \"4e12a41db7090d7917d0f8c57490c3603a1c4ac09068c76f2b0658d26374fc2a\""
Jan 27 14:40:08 embed-certs-635679 containerd[557]: time="2025-01-27T14:40:08.553014507Z" level=info msg="StartContainer for \"4e12a41db7090d7917d0f8c57490c3603a1c4ac09068c76f2b0658d26374fc2a\" returns successfully"
Jan 27 14:40:08 embed-certs-635679 containerd[557]: time="2025-01-27T14:40:08.591937618Z" level=info msg="shim disconnected" id=4e12a41db7090d7917d0f8c57490c3603a1c4ac09068c76f2b0658d26374fc2a namespace=k8s.io
Jan 27 14:40:08 embed-certs-635679 containerd[557]: time="2025-01-27T14:40:08.592111341Z" level=warning msg="cleaning up after shim disconnected" id=4e12a41db7090d7917d0f8c57490c3603a1c4ac09068c76f2b0658d26374fc2a namespace=k8s.io
Jan 27 14:40:08 embed-certs-635679 containerd[557]: time="2025-01-27T14:40:08.592356575Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 27 14:40:08 embed-certs-635679 containerd[557]: time="2025-01-27T14:40:08.635612210Z" level=info msg="RemoveContainer for \"0d35fca358e782ec00c00549c82131301c9d4c325c9dda59171043c6fc08e4c5\""
Jan 27 14:40:08 embed-certs-635679 containerd[557]: time="2025-01-27T14:40:08.650069374Z" level=info msg="RemoveContainer for \"0d35fca358e782ec00c00549c82131301c9d4c325c9dda59171043c6fc08e4c5\" returns successfully"
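Every PullImage failure in this section dies at DNS, before any registry traffic: fake.domain is an unresolvable name fed to metrics-server, apparently by design, so the pull can never succeed and the pod stays in ImagePullBackOff. A one-line check of that root cause:

package main

import (
	"fmt"
	"net"
)

func main() {
	addrs, err := net.LookupHost("fake.domain")
	if err != nil {
		fmt.Println("lookup failed as expected:", err) // "no such host", matching the containerd errors above
		return
	}
	fmt.Println("unexpectedly resolved:", addrs)
}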
==> coredns [02d05ad52d05c752b9f96e3e4a9586474fabc31fe8aa2f02fa2e8320c6726089] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
CoreDNS-1.11.3
linux/amd64, go1.21.11, a6338e9
==> coredns [8564a8569f15d671ca3ca1e9ad223e5c79149b078c634392de765621ba53192e] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
CoreDNS-1.11.3
linux/amd64, go1.21.11, a6338e9
==> describe nodes <==
Name: embed-certs-635679
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=embed-certs-635679
kubernetes.io/os=linux
minikube.k8s.io/commit=a23717f006184090cd3c7894641a342ba4ae8c4d
minikube.k8s.io/name=embed-certs-635679
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_01_27T14_18_19_0700
minikube.k8s.io/version=v1.35.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 27 Jan 2025 14:18:13 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: embed-certs-635679
AcquireTime: <unset>
RenewTime: Mon, 27 Jan 2025 14:40:25 +0000
Conditions:
Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
----             ------  -----------------                 ------------------                ------                       -------
MemoryPressure   False   Mon, 27 Jan 2025 14:39:13 +0000   Mon, 27 Jan 2025 14:18:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
DiskPressure     False   Mon, 27 Jan 2025 14:39:13 +0000   Mon, 27 Jan 2025 14:18:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
PIDPressure      False   Mon, 27 Jan 2025 14:39:13 +0000   Mon, 27 Jan 2025 14:18:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
Ready            True    Mon, 27 Jan 2025 14:39:13 +0000   Mon, 27 Jan 2025 14:18:13 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
InternalIP: 192.168.61.180
Hostname: embed-certs-635679
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 2164184Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 2164184Ki
pods: 110
System Info:
Machine ID: 059fb273da1b414b9b09f7893653fab6
System UUID: 059fb273-da1b-414b-9b09-f7893653fab6
Boot ID: 153d3165-7d8f-4e48-9390-146221d081a0
Kernel Version: 5.10.207
OS Image: Buildroot 2023.02.9
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.7.23
Kubelet Version: v1.32.1
Kube-Proxy Version: v1.32.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (11 in total)
Namespace             Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
---------             ----                                          ------------  ----------  ---------------  -------------  ---
kube-system           coredns-668d6bf9bc-52k8k                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
kube-system           coredns-668d6bf9bc-vn9c5                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
kube-system           etcd-embed-certs-635679                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
kube-system           kube-apiserver-embed-certs-635679             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
kube-system           kube-controller-manager-embed-certs-635679    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
kube-system           kube-proxy-k2hsk                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
kube-system           kube-scheduler-embed-certs-635679             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
kube-system           metrics-server-f79f97bbb-7xqnn                100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
kube-system           storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
kubernetes-dashboard  dashboard-metrics-scraper-86c6bf9756-gn6tj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
kubernetes-dashboard  kubernetes-dashboard-7779f9b69b-9vnfn         0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource           Requests     Limits
--------           --------     ------
cpu                950m (47%)   0 (0%)
memory             440Mi (20%)  340Mi (16%)
ephemeral-storage  0 (0%)       0 (0%)
hugepages-2Mi      0 (0%)       0 (0%)
Events:
Type    Reason                   Age                From             Message
----    ------                   ----               ----             -------
Normal  Starting                 22m                kube-proxy
Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node embed-certs-635679 status is now: NodeHasSufficientMemory
Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node embed-certs-635679 status is now: NodeHasNoDiskPressure
Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node embed-certs-635679 status is now: NodeHasSufficientPID
Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
Normal  Starting                 22m                kubelet          Starting kubelet.
Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
Normal  NodeHasSufficientMemory  22m                kubelet          Node embed-certs-635679 status is now: NodeHasSufficientMemory
Normal  NodeHasNoDiskPressure    22m                kubelet          Node embed-certs-635679 status is now: NodeHasNoDiskPressure
Normal  NodeHasSufficientPID     22m                kubelet          Node embed-certs-635679 status is now: NodeHasSufficientPID
Normal  RegisteredNode           22m                node-controller  Node embed-certs-635679 event: Registered Node embed-certs-635679 in Controller
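The Allocated resources block is just the per-pod requests above summed and divided by the node's Allocatable (2000m CPU, 2164184Ki memory), with the percentages truncated for display. Checking the arithmetic:

package main

import "fmt"

func main() {
	const allocCPUMilli = 2000.0 // Allocatable cpu: 2 cores
	const allocMemKi = 2164184.0 // Allocatable memory from the dump

	cpuReqMilli := 100 + 100 + 100 + 250 + 200 + 100 + 100 // per-pod CPU requests listed above, in millicores
	memReqMi := 70 + 70 + 100 + 200                        // coredns x2, etcd, metrics-server
	memLimMi := 170 + 170                                  // the two coredns limits

	fmt.Printf("cpu:    %dm  -> %.1f%% (shown truncated as 47%%)\n",
		cpuReqMilli, float64(cpuReqMilli)/allocCPUMilli*100)
	fmt.Printf("memory: %dMi requests -> %.1f%%, %dMi limits -> %.1f%% (shown as 20%% / 16%%)\n",
		memReqMi, float64(memReqMi)*1024/allocMemKi*100,
		memLimMi, float64(memLimMi)*1024/allocMemKi*100)
}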
==> dmesg <==
[ +0.052702] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
[ +0.039469] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +4.832448] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +2.028315] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
[ +1.567880] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
[ +7.329352] systemd-fstab-generator[479]: Ignoring "noauto" option for root device
[ +0.066335] kauditd_printk_skb: 1 callbacks suppressed
[ +0.060068] systemd-fstab-generator[491]: Ignoring "noauto" option for root device
[ +0.173743] systemd-fstab-generator[505]: Ignoring "noauto" option for root device
[ +0.129967] systemd-fstab-generator[517]: Ignoring "noauto" option for root device
[ +0.267080] systemd-fstab-generator[549]: Ignoring "noauto" option for root device
[ +1.043321] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
[ +2.681816] systemd-fstab-generator[754]: Ignoring "noauto" option for root device
[ +0.861542] kauditd_printk_skb: 225 callbacks suppressed
[ +5.528743] kauditd_printk_skb: 74 callbacks suppressed
[Jan27 14:14] kauditd_printk_skb: 50 callbacks suppressed
[Jan27 14:18] systemd-fstab-generator[3013]: Ignoring "noauto" option for root device
[ +9.071293] systemd-fstab-generator[3380]: Ignoring "noauto" option for root device
[ +0.097025] kauditd_printk_skb: 87 callbacks suppressed
[ +5.368821] systemd-fstab-generator[3479]: Ignoring "noauto" option for root device
[ +0.131454] kauditd_printk_skb: 12 callbacks suppressed
[ +9.626612] kauditd_printk_skb: 112 callbacks suppressed
[ +5.388176] kauditd_printk_skb: 5 callbacks suppressed
==> etcd [c27254f84098d782fe3765ecd61ecd61651516518cdbb5be2f10ad3ed25f830d] <==
{"level":"info","ts":"2025-01-27T14:18:16.664282Z","caller":"traceutil/trace.go:171","msg":"trace[1914227506] range","detail":"{range_begin:/registry/clusterrolebindings/system:controller:endpointslice-controller; range_end:; response_count:0; response_revision:192; }","duration":"118.019577ms","start":"2025-01-27T14:18:16.546238Z","end":"2025-01-27T14:18:16.664257Z","steps":["trace[1914227506] 'range keys from in-memory index tree' (duration: 117.005003ms)"],"step_count":1}
{"level":"info","ts":"2025-01-27T14:18:16.839650Z","caller":"traceutil/trace.go:171","msg":"trace[359175703] transaction","detail":"{read_only:false; response_revision:194; number_of_response:1; }","duration":"104.245905ms","start":"2025-01-27T14:18:16.735388Z","end":"2025-01-27T14:18:16.839633Z","steps":["trace[359175703] 'process raft request' (duration: 60.823424ms)","trace[359175703] 'compare' (duration: 43.31828ms)"],"step_count":2}
{"level":"warn","ts":"2025-01-27T14:18:17.101002Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.858603ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5960677556645753543 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterrolebindings/system:controller:expand-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterrolebindings/system:controller:expand-controller\" value_size:655 >> failure:<>>","response":"size:16"}
{"level":"info","ts":"2025-01-27T14:18:17.101148Z","caller":"traceutil/trace.go:171","msg":"trace[1496340768] transaction","detail":"{read_only:false; response_revision:195; number_of_response:1; }","duration":"257.045679ms","start":"2025-01-27T14:18:16.844087Z","end":"2025-01-27T14:18:17.101133Z","steps":["trace[1496340768] 'process raft request' (duration: 116.990703ms)","trace[1496340768] 'compare' (duration: 139.656308ms)"],"step_count":2}
{"level":"warn","ts":"2025-01-27T14:18:17.355387Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.281034ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5960677556645753547 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterrolebindings/system:controller:generic-garbage-collector\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterrolebindings/system:controller:generic-garbage-collector\" value_size:679 >> failure:<>>","response":"size:16"}
{"level":"info","ts":"2025-01-27T14:18:17.356216Z","caller":"traceutil/trace.go:171","msg":"trace[1591653030] transaction","detail":"{read_only:false; response_revision:197; number_of_response:1; }","duration":"188.255255ms","start":"2025-01-27T14:18:17.167946Z","end":"2025-01-27T14:18:17.356202Z","steps":["trace[1591653030] 'process raft request' (duration: 59.11668ms)","trace[1591653030] 'compare' (duration: 128.067168ms)"],"step_count":2}
{"level":"info","ts":"2025-01-27T14:18:32.815049Z","caller":"traceutil/trace.go:171","msg":"trace[1609911900] transaction","detail":"{read_only:false; response_revision:516; number_of_response:1; }","duration":"491.055422ms","start":"2025-01-27T14:18:32.323214Z","end":"2025-01-27T14:18:32.814270Z","steps":["trace[1609911900] 'process raft request' (duration: 490.228832ms)"],"step_count":1}
{"level":"info","ts":"2025-01-27T14:18:32.815117Z","caller":"traceutil/trace.go:171","msg":"trace[1629388984] linearizableReadLoop","detail":"{readStateIndex:531; appliedIndex:530; }","duration":"459.925355ms","start":"2025-01-27T14:18:32.354052Z","end":"2025-01-27T14:18:32.813978Z","steps":["trace[1629388984] 'read index received' (duration: 459.070682ms)","trace[1629388984] 'applied index is now lower than readState.Index' (duration: 854.047µs)"],"step_count":2}
{"level":"warn","ts":"2025-01-27T14:18:32.815220Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"461.154868ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-668d6bf9bc-vn9c5\" limit:1 ","response":"range_response_count:1 size:5089"}
{"level":"info","ts":"2025-01-27T14:18:32.816256Z","caller":"traceutil/trace.go:171","msg":"trace[458862632] range","detail":"{range_begin:/registry/pods/kube-system/coredns-668d6bf9bc-vn9c5; range_end:; response_count:1; response_revision:516; }","duration":"462.214627ms","start":"2025-01-27T14:18:32.354005Z","end":"2025-01-27T14:18:32.816219Z","steps":["trace[458862632] 'agreement among raft nodes before linearized reading' (duration: 461.119042ms)"],"step_count":1}
{"level":"warn","ts":"2025-01-27T14:18:32.816307Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T14:18:32.353989Z","time spent":"462.295735ms","remote":"127.0.0.1:43614","response type":"/etcdserverpb.KV/Range","request count":0,"request size":55,"response count":1,"response size":5112,"request content":"key:\"/registry/pods/kube-system/coredns-668d6bf9bc-vn9c5\" limit:1 "}
{"level":"warn","ts":"2025-01-27T14:18:32.817683Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T14:18:32.323188Z","time spent":"492.956313ms","remote":"127.0.0.1:43596","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:515 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
{"level":"info","ts":"2025-01-27T14:18:37.550999Z","caller":"traceutil/trace.go:171","msg":"trace[960790695] transaction","detail":"{read_only:false; response_revision:535; number_of_response:1; }","duration":"118.341864ms","start":"2025-01-27T14:18:37.432645Z","end":"2025-01-27T14:18:37.550987Z","steps":["trace[960790695] 'process raft request' (duration: 117.893853ms)"],"step_count":1}
{"level":"info","ts":"2025-01-27T14:18:37.550655Z","caller":"traceutil/trace.go:171","msg":"trace[1211411304] linearizableReadLoop","detail":"{readStateIndex:551; appliedIndex:550; }","duration":"109.057389ms","start":"2025-01-27T14:18:37.441580Z","end":"2025-01-27T14:18:37.550638Z","steps":["trace[1211411304] 'read index received' (duration: 108.895203ms)","trace[1211411304] 'applied index is now lower than readState.Index' (duration: 161.668µs)"],"step_count":2}
{"level":"warn","ts":"2025-01-27T14:18:37.551366Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.763911ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-f79f97bbb-7xqnn.181e926dcb3ea080\" limit:1 ","response":"range_response_count:1 size:816"}
{"level":"info","ts":"2025-01-27T14:18:37.551397Z","caller":"traceutil/trace.go:171","msg":"trace[763898449] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-f79f97bbb-7xqnn.181e926dcb3ea080; range_end:; response_count:1; response_revision:535; }","duration":"109.831038ms","start":"2025-01-27T14:18:37.441555Z","end":"2025-01-27T14:18:37.551386Z","steps":["trace[763898449] 'agreement among raft nodes before linearized reading' (duration: 109.751773ms)"],"step_count":1}
{"level":"info","ts":"2025-01-27T14:28:11.119641Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":862}
{"level":"info","ts":"2025-01-27T14:28:11.150038Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":862,"took":"28.958378ms","hash":1743789304,"current-db-size-bytes":2732032,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2732032,"current-db-size-in-use":"2.7 MB"}
{"level":"info","ts":"2025-01-27T14:28:11.150287Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":1743789304,"revision":862,"compact-revision":-1}
{"level":"info","ts":"2025-01-27T14:33:11.126237Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1114}
{"level":"info","ts":"2025-01-27T14:33:11.130952Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1114,"took":"3.791102ms","hash":2148902225,"current-db-size-bytes":2732032,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1720320,"current-db-size-in-use":"1.7 MB"}
{"level":"info","ts":"2025-01-27T14:33:11.131009Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2148902225,"revision":1114,"compact-revision":862}
{"level":"info","ts":"2025-01-27T14:38:11.132734Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1365}
{"level":"info","ts":"2025-01-27T14:38:11.136880Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1365,"took":"3.588783ms","hash":1689589143,"current-db-size-bytes":2732032,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1740800,"current-db-size-in-use":"1.7 MB"}
{"level":"info","ts":"2025-01-27T14:38:11.136931Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":1689589143,"revision":1365,"compact-revision":1114}
==> kernel <==
14:40:33 up 27 min, 0 users, load average: 0.54, 0.32, 0.20
Linux embed-certs-635679 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2023.02.9"
==> kube-apiserver [07166050ad18d63f7fef1538dc5d308e0c070f26157a049882568876590f1878] <==
I0127 14:36:14.157532 1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0127 14:36:14.158713 1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
W0127 14:38:13.154661 1 handler_proxy.go:99] no RequestInfo found in the context
E0127 14:38:13.155210 1 controller.go:146] "Unhandled Error" err=<
Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
W0127 14:38:14.157108 1 handler_proxy.go:99] no RequestInfo found in the context
E0127 14:38:14.157374 1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
W0127 14:38:14.157589 1 handler_proxy.go:99] no RequestInfo found in the context
E0127 14:38:14.157743 1 controller.go:102] "Unhandled Error" err=<
loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
I0127 14:38:14.159004 1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0127 14:38:14.159083 1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
W0127 14:39:14.159882 1 handler_proxy.go:99] no RequestInfo found in the context
E0127 14:39:14.160016 1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
W0127 14:39:14.159887 1 handler_proxy.go:99] no RequestInfo found in the context
E0127 14:39:14.160146 1 controller.go:102] "Unhandled Error" err=<
loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
I0127 14:39:14.161356 1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0127 14:39:14.161403 1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
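The 503s above and the "stale GroupVersion discovery" errors in the controller-manager section that follows share one cause: the v1beta1.metrics.k8s.io APIService points at the metrics-server Service, whose only pod never starts (stuck pulling fake.domain, per the kubelet section). Any full discovery walk reproduces the complaint. A sketch, assuming a default kubeconfig:

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Full discovery walks every registered APIService; the dead
	// metrics.k8s.io/v1beta1 group surfaces as a partial-discovery error,
	// much like what resource_quota_controller logs every 30s.
	_, _, err = dc.ServerGroupsAndResources()
	if err != nil {
		fmt.Println("partial discovery:", err)
		return
	}
	fmt.Println("all API groups reachable")
}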
==> kube-controller-manager [31762ba9c2652717502adc70a3218a8ce2c8cf94ccdacf92cd0e0351fbd946b7] <==
E0127 14:35:53.011711 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 14:35:53.068344 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0127 14:36:23.017633 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 14:36:23.076409 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0127 14:36:53.023468 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 14:36:53.084482 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0127 14:37:23.029445 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 14:37:23.091113 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0127 14:37:53.036733 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 14:37:53.099288 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0127 14:38:23.043820 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 14:38:23.108541 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0127 14:38:53.050599 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 14:38:53.115558 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
I0127 14:39:13.385113 1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="embed-certs-635679"
E0127 14:39:23.057421 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 14:39:23.126144 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
I0127 14:39:42.444001 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="320.114µs"
E0127 14:39:53.064557 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 14:39:53.132668 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
I0127 14:39:57.444814 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="133.069µs"
I0127 14:40:08.646485 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="129.38µs"
I0127 14:40:16.043484 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="49.421µs"
E0127 14:40:23.071818 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 14:40:23.141260 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
==> kube-proxy [4f8f8d72b2d07e8332023515af728edde6a649254bf14d8c2d86d5bdabe977e8] <==
add table ip kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^
>
E0127 14:18:24.775570 1 proxier.go:733] "Error cleaning up nftables rules" err=<
could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
add table ip6 kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^^
>
I0127 14:18:24.793352 1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.61.180"]
E0127 14:18:24.793481 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0127 14:18:24.871245 1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
I0127 14:18:24.871300 1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0127 14:18:24.871324 1 server_linux.go:170] "Using iptables Proxier"
I0127 14:18:24.873840 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0127 14:18:24.874110 1 server.go:497] "Version info" version="v1.32.1"
I0127 14:18:24.874136 1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0127 14:18:24.875892 1 config.go:199] "Starting service config controller"
I0127 14:18:24.875939 1 shared_informer.go:313] Waiting for caches to sync for service config
I0127 14:18:24.875976 1 config.go:105] "Starting endpoint slice config controller"
I0127 14:18:24.875981 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0127 14:18:24.876667 1 config.go:329] "Starting node config controller"
I0127 14:18:24.876697 1 shared_informer.go:313] Waiting for caches to sync for node config
I0127 14:18:24.976223 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0127 14:18:24.976278 1 shared_informer.go:320] Caches are synced for service config
I0127 14:18:24.977586 1 shared_informer.go:320] Caches are synced for node config
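The truncated block at the top of this section is kube-proxy's nftables cleanup probe failing on this Buildroot guest kernel, after which it falls back to the iptables proxier as logged. A sketch that mirrors the failing probe (assumes the nft binary is present; "-f -" reads the ruleset from stdin):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("nft", "-f", "-")
	cmd.Stdin = strings.NewReader("add table ip kube-proxy\n")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// On this kernel: "Error: Could not process rule: Operation not supported"
		fmt.Printf("nft failed (%v): %s", err, out)
		return
	}
	fmt.Println("nft table created; cleanup would be: nft delete table ip kube-proxy")
}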
==> kube-scheduler [a8744e9c180727296fd4ba21b613d2a9d24ba24eaa8f0f5e22a78aca756ef1c7] <==
W0127 14:18:14.514894 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0127 14:18:14.514927 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0127 14:18:14.545430 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0127 14:18:14.545480 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0127 14:18:14.611082 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0127 14:18:14.611185 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0127 14:18:14.620292 1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0127 14:18:14.620512 1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
W0127 14:18:14.686628 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0127 14:18:14.686900 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
W0127 14:18:14.694857 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
E0127 14:18:14.695087 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0127 14:18:14.696908 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0127 14:18:14.696950 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0127 14:18:14.702295 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0127 14:18:14.702317 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0127 14:18:14.853747 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0127 14:18:14.854077 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0127 14:18:14.866706 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0127 14:18:14.866994 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0127 14:18:14.881864 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0127 14:18:14.882119 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0127 14:18:16.189117 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0127 14:18:16.189165 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
I0127 14:18:17.764462 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
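The forbidden errors above are a startup race, not a misconfiguration: the scheduler's informers list resources before the RBAC bindings exist, and each reflector retries with backoff until the final "Caches are synced" line at 14:18:17. A toy version of that retry shape:

package main

import (
	"errors"
	"fmt"
	"time"
)

var errForbidden = errors.New("forbidden") // stands in for the RBAC errors above

func listOnce(attempt int) error {
	if attempt < 3 { // pretend RBAC lands on the 4th try
		return errForbidden
	}
	return nil
}

func main() {
	backoff := 500 * time.Millisecond
	for attempt := 0; ; attempt++ {
		if err := listOnce(attempt); err == nil {
			fmt.Println("caches synced after", attempt, "retries")
			return
		}
		time.Sleep(backoff)
		if backoff < 8*time.Second {
			backoff *= 2
		}
	}
}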
==> kubelet <==
Jan 27 14:39:28 embed-certs-635679 kubelet[3387]: E0127 14:39:28.440992 3387 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
Jan 27 14:39:28 embed-certs-635679 kubelet[3387]: E0127 14:39:28.441135 3387 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
Jan 27 14:39:28 embed-certs-635679 kubelet[3387]: E0127 14:39:28.441463 3387 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5r4q8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-f79f97bbb-7xqnn_kube-system(2fae80e8-5118-461e-b160-d384bf083f0f): ErrImagePull: failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
Jan 27 14:39:28 embed-certs-635679 kubelet[3387]: E0127 14:39:28.442990 3387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-7xqnn" podUID="2fae80e8-5118-461e-b160-d384bf083f0f"
Jan 27 14:39:41 embed-certs-635679 kubelet[3387]: I0127 14:39:41.425052 3387 scope.go:117] "RemoveContainer" containerID="0d35fca358e782ec00c00549c82131301c9d4c325c9dda59171043c6fc08e4c5"
Jan 27 14:39:41 embed-certs-635679 kubelet[3387]: E0127 14:39:41.426126 3387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-gn6tj_kubernetes-dashboard(0701808a-6bbc-4551-9fb3-3f5236257073)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-gn6tj" podUID="0701808a-6bbc-4551-9fb3-3f5236257073"
Jan 27 14:39:42 embed-certs-635679 kubelet[3387]: E0127 14:39:42.425918 3387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-7xqnn" podUID="2fae80e8-5118-461e-b160-d384bf083f0f"
Jan 27 14:39:55 embed-certs-635679 kubelet[3387]: I0127 14:39:55.425731 3387 scope.go:117] "RemoveContainer" containerID="0d35fca358e782ec00c00549c82131301c9d4c325c9dda59171043c6fc08e4c5"
Jan 27 14:39:55 embed-certs-635679 kubelet[3387]: E0127 14:39:55.425987 3387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-gn6tj_kubernetes-dashboard(0701808a-6bbc-4551-9fb3-3f5236257073)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-gn6tj" podUID="0701808a-6bbc-4551-9fb3-3f5236257073"
Jan 27 14:39:57 embed-certs-635679 kubelet[3387]: E0127 14:39:57.426414 3387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-7xqnn" podUID="2fae80e8-5118-461e-b160-d384bf083f0f"
Jan 27 14:40:08 embed-certs-635679 kubelet[3387]: I0127 14:40:08.428129 3387 scope.go:117] "RemoveContainer" containerID="0d35fca358e782ec00c00549c82131301c9d4c325c9dda59171043c6fc08e4c5"
Jan 27 14:40:08 embed-certs-635679 kubelet[3387]: E0127 14:40:08.428844 3387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-7xqnn" podUID="2fae80e8-5118-461e-b160-d384bf083f0f"
Jan 27 14:40:08 embed-certs-635679 kubelet[3387]: I0127 14:40:08.627900 3387 scope.go:117] "RemoveContainer" containerID="0d35fca358e782ec00c00549c82131301c9d4c325c9dda59171043c6fc08e4c5"
Jan 27 14:40:08 embed-certs-635679 kubelet[3387]: I0127 14:40:08.628355 3387 scope.go:117] "RemoveContainer" containerID="4e12a41db7090d7917d0f8c57490c3603a1c4ac09068c76f2b0658d26374fc2a"
Jan 27 14:40:08 embed-certs-635679 kubelet[3387]: E0127 14:40:08.628581 3387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-gn6tj_kubernetes-dashboard(0701808a-6bbc-4551-9fb3-3f5236257073)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-gn6tj" podUID="0701808a-6bbc-4551-9fb3-3f5236257073"
Jan 27 14:40:16 embed-certs-635679 kubelet[3387]: I0127 14:40:16.028518 3387 scope.go:117] "RemoveContainer" containerID="4e12a41db7090d7917d0f8c57490c3603a1c4ac09068c76f2b0658d26374fc2a"
Jan 27 14:40:16 embed-certs-635679 kubelet[3387]: E0127 14:40:16.028691 3387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-gn6tj_kubernetes-dashboard(0701808a-6bbc-4551-9fb3-3f5236257073)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-gn6tj" podUID="0701808a-6bbc-4551-9fb3-3f5236257073"
Jan 27 14:40:18 embed-certs-635679 kubelet[3387]: E0127 14:40:18.441960 3387 iptables.go:577] "Could not set up iptables canary" err=<
Jan 27 14:40:18 embed-certs-635679 kubelet[3387]: error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
Jan 27 14:40:18 embed-certs-635679 kubelet[3387]: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Jan 27 14:40:18 embed-certs-635679 kubelet[3387]: Perhaps ip6tables or your kernel needs to be upgraded.
Jan 27 14:40:18 embed-certs-635679 kubelet[3387]: > table="nat" chain="KUBE-KUBELET-CANARY"
Jan 27 14:40:22 embed-certs-635679 kubelet[3387]: E0127 14:40:22.426510 3387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-7xqnn" podUID="2fae80e8-5118-461e-b160-d384bf083f0f"
Jan 27 14:40:28 embed-certs-635679 kubelet[3387]: I0127 14:40:28.425959 3387 scope.go:117] "RemoveContainer" containerID="4e12a41db7090d7917d0f8c57490c3603a1c4ac09068c76f2b0658d26374fc2a"
Jan 27 14:40:28 embed-certs-635679 kubelet[3387]: E0127 14:40:28.426132 3387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-gn6tj_kubernetes-dashboard(0701808a-6bbc-4551-9fb3-3f5236257073)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-gn6tj" podUID="0701808a-6bbc-4551-9fb3-3f5236257073"
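The recurring "back-off 5m0s restarting failed container" lines are kubelet's capped restart backoff for the crash-looping scraper. Assuming kubelet's usual defaults (10s initial delay, doubling, 5m cap; nothing in this test overrides them), the schedule looks like:

package main

import (
	"fmt"
	"time"
)

func main() {
	d := 10 * time.Second // assumed initial restart delay
	for i := 1; i <= 8; i++ {
		fmt.Printf("restart %d: wait %v\n", i, d)
		if d *= 2; d > 5*time.Minute {
			d = 5 * time.Minute // the 5m0s cap seen in the log
		}
	}
}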
==> kubernetes-dashboard [e89316ee54115ff814681a1206060ff283df15367f524eac34bd68ee628d2bf4] <==
2025/01/27 14:28:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 14:28:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 14:29:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 14:29:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 14:30:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 14:30:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 14:31:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 14:31:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 14:32:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 14:32:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 14:33:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 14:33:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 14:34:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 14:34:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 14:35:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 14:35:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 14:36:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 14:36:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 14:37:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 14:37:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 14:38:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 14:38:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 14:39:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 14:39:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 14:40:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
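The dashboard repeats this health check every 30 seconds for over twelve minutes: its metrics client cannot reach the dashboard-metrics-scraper service, which is consistent with the scraper pod's CrashLoopBackOff in the kubelet log above, since a crash-looping pod leaves the service with no ready endpoints. An illustrative check, not part of the captured run, to confirm the service is backed by no endpoints:
  kubectl --context embed-certs-635679 -n kubernetes-dashboard get svc,endpoints dashboard-metrics-scraper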
==> storage-provisioner [2679dfaab79eb703b9951ce9d7b7994254f7d475f6890c525e36e5fc8a5ee306] <==
I0127 14:18:26.002205 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0127 14:18:26.050084 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0127 14:18:26.056067 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0127 14:18:26.089361 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0127 14:18:26.089546 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-635679_b2f2fbb4-6ed8-4dd8-9e94-5065f87dcffe!
I0127 14:18:26.090621 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0a68f20c-7a75-4920-9933-5237c6d16c12", APIVersion:"v1", ResourceVersion:"421", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-635679_b2f2fbb4-6ed8-4dd8-9e94-5065f87dcffe became leader
I0127 14:18:26.495046 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-635679_b2f2fbb4-6ed8-4dd8-9e94-5065f87dcffe!
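The provisioner log above traces the standard client-go leader-election handshake: attempt the lease, acquire it, start the controller, and emit a LeaderElection event against the kube-system/k8s.io-minikube-hostpath Endpoints object that serves as the lock. Assuming the Endpoints-based lock this event suggests, the current holder could be read back (illustrative, not part of the captured run) from the object's leader annotation, conventionally control-plane.alpha.kubernetes.io/leader:
  kubectl --context embed-certs-635679 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml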
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-635679 -n embed-certs-635679
helpers_test.go:261: (dbg) Run: kubectl --context embed-certs-635679 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-7xqnn
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context embed-certs-635679 describe pod metrics-server-f79f97bbb-7xqnn
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-635679 describe pod metrics-server-f79f97bbb-7xqnn: exit status 1 (62.734928ms)
** stderr **
Error from server (NotFound): pods "metrics-server-f79f97bbb-7xqnn" not found
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-635679 describe pod metrics-server-f79f97bbb-7xqnn: exit status 1
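The NotFound from the post-mortem describe is most likely a namespace mismatch rather than the pod disappearing: the harness's kubectl invocation passes no -n flag, so it looks in the default namespace, while the kubelet entries above place metrics-server-f79f97bbb-7xqnn in kube-system (and the cross-namespace get at helpers_test.go:261 had just listed it). An invocation scoped to the right namespace, shown here only as an illustration, would be:
  kubectl --context embed-certs-635679 -n kube-system describe pod metrics-server-f79f97bbb-7xqnn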
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (1633.06s)