=== RUN TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run: out/minikube-linux-amd64 start -p no-preload-677886 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.32.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p no-preload-677886 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.32.0: signal: killed (25m38.549905556s)
-- stdout --
* [no-preload-677886] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=20151
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/20151-530330/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-530330/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the kvm2 driver based on existing profile
* Starting "no-preload-677886" primary control-plane node in "no-preload-677886" cluster
* Restarting existing kvm2 VM for "no-preload-677886" ...
* Preparing Kubernetes v1.32.0 on containerd 1.7.23 ...
* Configuring bridge CNI (Container Networking Interface) ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image docker.io/kubernetesui/dashboard:v2.7.0
- Using image registry.k8s.io/echoserver:1.4
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
- Using image fake.domain/registry.k8s.io/echoserver:1.4
* Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p no-preload-677886 addons enable metrics-server
* Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
-- /stdout --
** stderr **
I0120 12:24:56.456694 580663 out.go:345] Setting OutFile to fd 1 ...
I0120 12:24:56.456807 580663 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 12:24:56.456819 580663 out.go:358] Setting ErrFile to fd 2...
I0120 12:24:56.456825 580663 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 12:24:56.457135 580663 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-530330/.minikube/bin
I0120 12:24:56.457912 580663 out.go:352] Setting JSON to false
I0120 12:24:56.459154 580663 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":7638,"bootTime":1737368258,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0120 12:24:56.459293 580663 start.go:139] virtualization: kvm guest
I0120 12:24:56.462566 580663 out.go:177] * [no-preload-677886] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
I0120 12:24:56.464284 580663 notify.go:220] Checking for updates...
I0120 12:24:56.464318 580663 out.go:177] - MINIKUBE_LOCATION=20151
I0120 12:24:56.465942 580663 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0120 12:24:56.467512 580663 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20151-530330/kubeconfig
I0120 12:24:56.469186 580663 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-530330/.minikube
I0120 12:24:56.471016 580663 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0120 12:24:56.472494 580663 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0120 12:24:56.474747 580663 config.go:182] Loaded profile config "no-preload-677886": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 12:24:56.475419 580663 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 12:24:56.475515 580663 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 12:24:56.496824 580663 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36889
I0120 12:24:56.497392 580663 main.go:141] libmachine: () Calling .GetVersion
I0120 12:24:56.498149 580663 main.go:141] libmachine: Using API Version 1
I0120 12:24:56.498177 580663 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 12:24:56.498597 580663 main.go:141] libmachine: () Calling .GetMachineName
I0120 12:24:56.498857 580663 main.go:141] libmachine: (no-preload-677886) Calling .DriverName
I0120 12:24:56.499148 580663 driver.go:394] Setting default libvirt URI to qemu:///system
I0120 12:24:56.499492 580663 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 12:24:56.499559 580663 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 12:24:56.516567 580663 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39129
I0120 12:24:56.517028 580663 main.go:141] libmachine: () Calling .GetVersion
I0120 12:24:56.517699 580663 main.go:141] libmachine: Using API Version 1
I0120 12:24:56.517733 580663 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 12:24:56.518096 580663 main.go:141] libmachine: () Calling .GetMachineName
I0120 12:24:56.518340 580663 main.go:141] libmachine: (no-preload-677886) Calling .DriverName
I0120 12:24:56.563618 580663 out.go:177] * Using the kvm2 driver based on existing profile
I0120 12:24:56.565156 580663 start.go:297] selected driver: kvm2
I0120 12:24:56.565183 580663 start.go:901] validating driver "kvm2" against &{Name:no-preload-677886 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:no-preload-677886 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.157 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0120 12:24:56.565401 580663 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0120 12:24:56.566509 580663 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 12:24:56.566612 580663 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20151-530330/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0120 12:24:56.585311 580663 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
I0120 12:24:56.585967 580663 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0120 12:24:56.586027 580663 cni.go:84] Creating CNI manager for ""
I0120 12:24:56.586110 580663 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0120 12:24:56.586173 580663 start.go:340] cluster config:
{Name:no-preload-677886 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:no-preload-677886 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.157 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0120 12:24:56.586332 580663 iso.go:125] acquiring lock: {Name:mk734d848ce0e9a68d8d00ecbd0f5085f599b42f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 12:24:56.588434 580663 out.go:177] * Starting "no-preload-677886" primary control-plane node in "no-preload-677886" cluster
I0120 12:24:56.589859 580663 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
I0120 12:24:56.590048 580663 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/no-preload-677886/config.json ...
I0120 12:24:56.590096 580663 cache.go:107] acquiring lock: {Name:mkb50d5c4959af228c3f0e841267fc713f5657bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 12:24:56.590106 580663 cache.go:107] acquiring lock: {Name:mk7743765bee0171fb8408c07ab96f967c01da33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 12:24:56.590194 580663 cache.go:107] acquiring lock: {Name:mkdd6761dcff9cb317bee6a39867dd9f91a1c9d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 12:24:56.590199 580663 cache.go:107] acquiring lock: {Name:mkc3dcde5042d302783249c200b73a28b4207bfc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 12:24:56.590260 580663 cache.go:107] acquiring lock: {Name:mk801d27d0882d516653d3fd5808264aae328741 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 12:24:56.590306 580663 cache.go:107] acquiring lock: {Name:mkcf6886d16e7a92b8a48ad7cc85e0173f8a2af5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 12:24:56.590342 580663 cache.go:115] /home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.0 exists
I0120 12:24:56.590342 580663 start.go:360] acquireMachinesLock for no-preload-677886: {Name:mkcd5f2d114897136dd2343f9fcf468e718657e6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0120 12:24:56.590353 580663 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.32.0" -> "/home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.0" took 96.435µs
I0120 12:24:56.590362 580663 cache.go:115] /home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.0 exists
I0120 12:24:56.590374 580663 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.32.0 -> /home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.0 succeeded
I0120 12:24:56.590373 580663 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.32.0" -> "/home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.0" took 278.104µs
I0120 12:24:56.590289 580663 cache.go:115] /home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.0 exists
I0120 12:24:56.590386 580663 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.32.0 -> /home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.0 succeeded
I0120 12:24:56.590354 580663 cache.go:115] /home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
I0120 12:24:56.590391 580663 start.go:364] duration metric: took 27.887µs to acquireMachinesLock for "no-preload-677886"
I0120 12:24:56.590392 580663 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.32.0" -> "/home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.0" took 198.872µs
I0120 12:24:56.590399 580663 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 96.603µs
I0120 12:24:56.590270 580663 cache.go:107] acquiring lock: {Name:mk8225973acaf0d36eacdfb4eba92b0ed26bdad9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 12:24:56.590407 580663 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
I0120 12:24:56.590409 580663 start.go:96] Skipping create...Using existing machine configuration
I0120 12:24:56.590418 580663 fix.go:54] fixHost starting:
I0120 12:24:56.590402 580663 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.32.0 -> /home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.0 succeeded
I0120 12:24:56.590450 580663 cache.go:115] /home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
I0120 12:24:56.590457 580663 cache.go:115] /home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
I0120 12:24:56.590440 580663 cache.go:107] acquiring lock: {Name:mkb7510ccea43e6b11ab4abd1910eac7e5808368 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 12:24:56.590475 580663 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 388.372µs
I0120 12:24:56.590497 580663 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
I0120 12:24:56.590461 580663 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 195.047µs
I0120 12:24:56.590509 580663 cache.go:115] /home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.0 exists
I0120 12:24:56.590513 580663 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
I0120 12:24:56.590491 580663 cache.go:115] /home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 exists
I0120 12:24:56.590519 580663 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.32.0" -> "/home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.0" took 97.717µs
I0120 12:24:56.590528 580663 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.32.0 -> /home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.0 succeeded
I0120 12:24:56.590541 580663 cache.go:96] cache image "registry.k8s.io/etcd:3.5.16-0" -> "/home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0" took 351.851µs
I0120 12:24:56.590559 580663 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.16-0 -> /home/jenkins/minikube-integration/20151-530330/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 succeeded
I0120 12:24:56.590568 580663 cache.go:87] Successfully saved all images to host disk.
I0120 12:24:56.590843 580663 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 12:24:56.590885 580663 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 12:24:56.609126 580663 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43249
I0120 12:24:56.609634 580663 main.go:141] libmachine: () Calling .GetVersion
I0120 12:24:56.610330 580663 main.go:141] libmachine: Using API Version 1
I0120 12:24:56.610358 580663 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 12:24:56.610688 580663 main.go:141] libmachine: () Calling .GetMachineName
I0120 12:24:56.610900 580663 main.go:141] libmachine: (no-preload-677886) Calling .DriverName
I0120 12:24:56.611061 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetState
I0120 12:24:56.613327 580663 fix.go:112] recreateIfNeeded on no-preload-677886: state=Stopped err=<nil>
I0120 12:24:56.613369 580663 main.go:141] libmachine: (no-preload-677886) Calling .DriverName
W0120 12:24:56.613547 580663 fix.go:138] unexpected machine state, will restart: <nil>
I0120 12:24:56.615784 580663 out.go:177] * Restarting existing kvm2 VM for "no-preload-677886" ...
I0120 12:24:56.617166 580663 main.go:141] libmachine: (no-preload-677886) Calling .Start
I0120 12:24:56.617403 580663 main.go:141] libmachine: (no-preload-677886) starting domain...
I0120 12:24:56.617428 580663 main.go:141] libmachine: (no-preload-677886) ensuring networks are active...
I0120 12:24:56.618493 580663 main.go:141] libmachine: (no-preload-677886) Ensuring network default is active
I0120 12:24:56.618996 580663 main.go:141] libmachine: (no-preload-677886) Ensuring network mk-no-preload-677886 is active
I0120 12:24:56.619551 580663 main.go:141] libmachine: (no-preload-677886) getting domain XML...
I0120 12:24:56.620569 580663 main.go:141] libmachine: (no-preload-677886) creating domain...
I0120 12:24:58.098571 580663 main.go:141] libmachine: (no-preload-677886) waiting for IP...
I0120 12:24:58.099691 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:24:58.100113 580663 main.go:141] libmachine: (no-preload-677886) DBG | unable to find current IP address of domain no-preload-677886 in network mk-no-preload-677886
I0120 12:24:58.100379 580663 main.go:141] libmachine: (no-preload-677886) DBG | I0120 12:24:58.100141 580698 retry.go:31] will retry after 196.998651ms: waiting for domain to come up
I0120 12:24:58.299005 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:24:58.299649 580663 main.go:141] libmachine: (no-preload-677886) DBG | unable to find current IP address of domain no-preload-677886 in network mk-no-preload-677886
I0120 12:24:58.299683 580663 main.go:141] libmachine: (no-preload-677886) DBG | I0120 12:24:58.299605 580698 retry.go:31] will retry after 315.24245ms: waiting for domain to come up
I0120 12:24:58.616292 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:24:58.616904 580663 main.go:141] libmachine: (no-preload-677886) DBG | unable to find current IP address of domain no-preload-677886 in network mk-no-preload-677886
I0120 12:24:58.616939 580663 main.go:141] libmachine: (no-preload-677886) DBG | I0120 12:24:58.616849 580698 retry.go:31] will retry after 406.941804ms: waiting for domain to come up
I0120 12:24:59.025591 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:24:59.026266 580663 main.go:141] libmachine: (no-preload-677886) DBG | unable to find current IP address of domain no-preload-677886 in network mk-no-preload-677886
I0120 12:24:59.026295 580663 main.go:141] libmachine: (no-preload-677886) DBG | I0120 12:24:59.026214 580698 retry.go:31] will retry after 583.374913ms: waiting for domain to come up
I0120 12:24:59.610886 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:24:59.611404 580663 main.go:141] libmachine: (no-preload-677886) DBG | unable to find current IP address of domain no-preload-677886 in network mk-no-preload-677886
I0120 12:24:59.611431 580663 main.go:141] libmachine: (no-preload-677886) DBG | I0120 12:24:59.611365 580698 retry.go:31] will retry after 580.640955ms: waiting for domain to come up
I0120 12:25:00.193188 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:25:00.193688 580663 main.go:141] libmachine: (no-preload-677886) DBG | unable to find current IP address of domain no-preload-677886 in network mk-no-preload-677886
I0120 12:25:00.193721 580663 main.go:141] libmachine: (no-preload-677886) DBG | I0120 12:25:00.193666 580698 retry.go:31] will retry after 767.186037ms: waiting for domain to come up
I0120 12:25:00.962901 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:25:00.963487 580663 main.go:141] libmachine: (no-preload-677886) DBG | unable to find current IP address of domain no-preload-677886 in network mk-no-preload-677886
I0120 12:25:00.963557 580663 main.go:141] libmachine: (no-preload-677886) DBG | I0120 12:25:00.963442 580698 retry.go:31] will retry after 784.374872ms: waiting for domain to come up
I0120 12:25:01.749153 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:25:01.749729 580663 main.go:141] libmachine: (no-preload-677886) DBG | unable to find current IP address of domain no-preload-677886 in network mk-no-preload-677886
I0120 12:25:01.749762 580663 main.go:141] libmachine: (no-preload-677886) DBG | I0120 12:25:01.749683 580698 retry.go:31] will retry after 985.496204ms: waiting for domain to come up
I0120 12:25:02.736982 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:25:02.737613 580663 main.go:141] libmachine: (no-preload-677886) DBG | unable to find current IP address of domain no-preload-677886 in network mk-no-preload-677886
I0120 12:25:02.737645 580663 main.go:141] libmachine: (no-preload-677886) DBG | I0120 12:25:02.737573 580698 retry.go:31] will retry after 1.287227851s: waiting for domain to come up
I0120 12:25:04.027162 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:25:04.027595 580663 main.go:141] libmachine: (no-preload-677886) DBG | unable to find current IP address of domain no-preload-677886 in network mk-no-preload-677886
I0120 12:25:04.027641 580663 main.go:141] libmachine: (no-preload-677886) DBG | I0120 12:25:04.027590 580698 retry.go:31] will retry after 2.033306338s: waiting for domain to come up
I0120 12:25:06.062268 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:25:06.062806 580663 main.go:141] libmachine: (no-preload-677886) DBG | unable to find current IP address of domain no-preload-677886 in network mk-no-preload-677886
I0120 12:25:06.062834 580663 main.go:141] libmachine: (no-preload-677886) DBG | I0120 12:25:06.062769 580698 retry.go:31] will retry after 2.791569905s: waiting for domain to come up
I0120 12:25:08.855885 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:25:08.856539 580663 main.go:141] libmachine: (no-preload-677886) DBG | unable to find current IP address of domain no-preload-677886 in network mk-no-preload-677886
I0120 12:25:08.856567 580663 main.go:141] libmachine: (no-preload-677886) DBG | I0120 12:25:08.856507 580698 retry.go:31] will retry after 2.690350592s: waiting for domain to come up
I0120 12:25:11.550477 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:25:11.551079 580663 main.go:141] libmachine: (no-preload-677886) DBG | unable to find current IP address of domain no-preload-677886 in network mk-no-preload-677886
I0120 12:25:11.551109 580663 main.go:141] libmachine: (no-preload-677886) DBG | I0120 12:25:11.551004 580698 retry.go:31] will retry after 3.84625692s: waiting for domain to come up
I0120 12:25:15.401681 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:25:15.402320 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has current primary IP address 192.168.72.157 and MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:25:15.402366 580663 main.go:141] libmachine: (no-preload-677886) found domain IP: 192.168.72.157
I0120 12:25:15.402381 580663 main.go:141] libmachine: (no-preload-677886) reserving static IP address...
I0120 12:25:15.402786 580663 main.go:141] libmachine: (no-preload-677886) reserved static IP address 192.168.72.157 for domain no-preload-677886
I0120 12:25:15.402814 580663 main.go:141] libmachine: (no-preload-677886) waiting for SSH...
I0120 12:25:15.402835 580663 main.go:141] libmachine: (no-preload-677886) DBG | found host DHCP lease matching {name: "no-preload-677886", mac: "52:54:00:3c:87:c0", ip: "192.168.72.157"} in network mk-no-preload-677886: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:44 +0000 UTC Type:0 Mac:52:54:00:3c:87:c0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-677886 Clientid:01:52:54:00:3c:87:c0}
I0120 12:25:15.402860 580663 main.go:141] libmachine: (no-preload-677886) DBG | skip adding static IP to network mk-no-preload-677886 - found existing host DHCP lease matching {name: "no-preload-677886", mac: "52:54:00:3c:87:c0", ip: "192.168.72.157"}
I0120 12:25:15.402873 580663 main.go:141] libmachine: (no-preload-677886) DBG | Getting to WaitForSSH function...
I0120 12:25:15.405269 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:25:15.405604 580663 main.go:141] libmachine: (no-preload-677886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:87:c0", ip: ""} in network mk-no-preload-677886: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:44 +0000 UTC Type:0 Mac:52:54:00:3c:87:c0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-677886 Clientid:01:52:54:00:3c:87:c0}
I0120 12:25:15.405626 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined IP address 192.168.72.157 and MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:25:15.405761 580663 main.go:141] libmachine: (no-preload-677886) DBG | Using SSH client type: external
I0120 12:25:15.405775 580663 main.go:141] libmachine: (no-preload-677886) DBG | Using SSH private key: /home/jenkins/minikube-integration/20151-530330/.minikube/machines/no-preload-677886/id_rsa (-rw-------)
I0120 12:25:15.405832 580663 main.go:141] libmachine: (no-preload-677886) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.157 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20151-530330/.minikube/machines/no-preload-677886/id_rsa -p 22] /usr/bin/ssh <nil>}
I0120 12:25:15.405857 580663 main.go:141] libmachine: (no-preload-677886) DBG | About to run SSH command:
I0120 12:25:15.405877 580663 main.go:141] libmachine: (no-preload-677886) DBG | exit 0
I0120 12:25:15.530526 580663 main.go:141] libmachine: (no-preload-677886) DBG | SSH cmd err, output: <nil>:
I0120 12:25:15.530944 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetConfigRaw
I0120 12:25:15.531629 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetIP
I0120 12:25:15.534406 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:25:15.534911 580663 main.go:141] libmachine: (no-preload-677886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:87:c0", ip: ""} in network mk-no-preload-677886: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:44 +0000 UTC Type:0 Mac:52:54:00:3c:87:c0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-677886 Clientid:01:52:54:00:3c:87:c0}
I0120 12:25:15.534958 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined IP address 192.168.72.157 and MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:25:15.535249 580663 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/no-preload-677886/config.json ...
I0120 12:25:15.535471 580663 machine.go:93] provisionDockerMachine start ...
I0120 12:25:15.535490 580663 main.go:141] libmachine: (no-preload-677886) Calling .DriverName
I0120 12:25:15.535721 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHHostname
I0120 12:25:15.538459 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:25:15.538821 580663 main.go:141] libmachine: (no-preload-677886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:87:c0", ip: ""} in network mk-no-preload-677886: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:44 +0000 UTC Type:0 Mac:52:54:00:3c:87:c0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-677886 Clientid:01:52:54:00:3c:87:c0}
I0120 12:25:15.538844 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined IP address 192.168.72.157 and MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:25:15.539004 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHPort
I0120 12:25:15.539194 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHKeyPath
I0120 12:25:15.539379 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHKeyPath
I0120 12:25:15.539551 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHUsername
I0120 12:25:15.539760 580663 main.go:141] libmachine: Using SSH client type: native
I0120 12:25:15.540012 580663 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.72.157 22 <nil> <nil>}
I0120 12:25:15.540025 580663 main.go:141] libmachine: About to run SSH command:
hostname
I0120 12:25:15.646562 580663 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0120 12:25:15.646591 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetMachineName
I0120 12:25:15.646876 580663 buildroot.go:166] provisioning hostname "no-preload-677886"
I0120 12:25:15.646908 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetMachineName
I0120 12:25:15.647130 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHHostname
I0120 12:25:15.650308 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:25:15.650669 580663 main.go:141] libmachine: (no-preload-677886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:87:c0", ip: ""} in network mk-no-preload-677886: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:44 +0000 UTC Type:0 Mac:52:54:00:3c:87:c0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-677886 Clientid:01:52:54:00:3c:87:c0}
I0120 12:25:15.650698 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined IP address 192.168.72.157 and MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:25:15.650879 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHPort
I0120 12:25:15.651128 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHKeyPath
I0120 12:25:15.651342 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHKeyPath
I0120 12:25:15.651556 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHUsername
I0120 12:25:15.651786 580663 main.go:141] libmachine: Using SSH client type: native
I0120 12:25:15.652025 580663 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.72.157 22 <nil> <nil>}
I0120 12:25:15.652053 580663 main.go:141] libmachine: About to run SSH command:
sudo hostname no-preload-677886 && echo "no-preload-677886" | sudo tee /etc/hostname
I0120 12:25:15.768606 580663 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-677886
I0120 12:25:15.768640 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHHostname
I0120 12:25:15.771694 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:25:15.772037 580663 main.go:141] libmachine: (no-preload-677886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:87:c0", ip: ""} in network mk-no-preload-677886: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:44 +0000 UTC Type:0 Mac:52:54:00:3c:87:c0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-677886 Clientid:01:52:54:00:3c:87:c0}
I0120 12:25:15.772087 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined IP address 192.168.72.157 and MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:25:15.772269 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHPort
I0120 12:25:15.772467 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHKeyPath
I0120 12:25:15.772674 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHKeyPath
I0120 12:25:15.772805 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHUsername
I0120 12:25:15.772937 580663 main.go:141] libmachine: Using SSH client type: native
I0120 12:25:15.773113 580663 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.72.157 22 <nil> <nil>}
I0120 12:25:15.773128 580663 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sno-preload-677886' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-677886/g' /etc/hosts;
else
echo '127.0.1.1 no-preload-677886' | sudo tee -a /etc/hosts;
fi
fi
I0120 12:25:15.879097 580663 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0120 12:25:15.879135 580663 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20151-530330/.minikube CaCertPath:/home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20151-530330/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20151-530330/.minikube}
I0120 12:25:15.879160 580663 buildroot.go:174] setting up certificates
I0120 12:25:15.879175 580663 provision.go:84] configureAuth start
I0120 12:25:15.879203 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetMachineName
I0120 12:25:15.879546 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetIP
I0120 12:25:15.882077 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:25:15.882472 580663 main.go:141] libmachine: (no-preload-677886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:87:c0", ip: ""} in network mk-no-preload-677886: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:44 +0000 UTC Type:0 Mac:52:54:00:3c:87:c0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-677886 Clientid:01:52:54:00:3c:87:c0}
I0120 12:25:15.882503 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined IP address 192.168.72.157 and MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:25:15.882635 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHHostname
I0120 12:25:15.884841 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:25:15.885175 580663 main.go:141] libmachine: (no-preload-677886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:87:c0", ip: ""} in network mk-no-preload-677886: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:44 +0000 UTC Type:0 Mac:52:54:00:3c:87:c0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-677886 Clientid:01:52:54:00:3c:87:c0}
I0120 12:25:15.885215 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined IP address 192.168.72.157 and MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:25:15.885392 580663 provision.go:143] copyHostCerts
I0120 12:25:15.885460 580663 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-530330/.minikube/ca.pem, removing ...
I0120 12:25:15.885483 580663 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-530330/.minikube/ca.pem
I0120 12:25:15.885554 580663 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20151-530330/.minikube/ca.pem (1078 bytes)
I0120 12:25:15.885685 580663 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-530330/.minikube/cert.pem, removing ...
I0120 12:25:15.885695 580663 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-530330/.minikube/cert.pem
I0120 12:25:15.885727 580663 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20151-530330/.minikube/cert.pem (1123 bytes)
I0120 12:25:15.885830 580663 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-530330/.minikube/key.pem, removing ...
I0120 12:25:15.885840 580663 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-530330/.minikube/key.pem
I0120 12:25:15.885869 580663 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20151-530330/.minikube/key.pem (1675 bytes)
I0120 12:25:15.885949 580663 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20151-530330/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca-key.pem org=jenkins.no-preload-677886 san=[127.0.0.1 192.168.72.157 localhost minikube no-preload-677886]
I0120 12:25:16.005597 580663 provision.go:177] copyRemoteCerts
I0120 12:25:16.005691 580663 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0120 12:25:16.005730 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHHostname
I0120 12:25:16.008891 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:25:16.009345 580663 main.go:141] libmachine: (no-preload-677886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:87:c0", ip: ""} in network mk-no-preload-677886: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:44 +0000 UTC Type:0 Mac:52:54:00:3c:87:c0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-677886 Clientid:01:52:54:00:3c:87:c0}
I0120 12:25:16.009389 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined IP address 192.168.72.157 and MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:25:16.009623 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHPort
I0120 12:25:16.009837 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHKeyPath
I0120 12:25:16.010002 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHUsername
I0120 12:25:16.010130 580663 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/no-preload-677886/id_rsa Username:docker}
I0120 12:25:16.099759 580663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I0120 12:25:16.130170 580663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0120 12:25:16.157604 580663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0120 12:25:16.187763 580663 provision.go:87] duration metric: took 308.558766ms to configureAuth
I0120 12:25:16.187795 580663 buildroot.go:189] setting minikube options for container-runtime
I0120 12:25:16.188011 580663 config.go:182] Loaded profile config "no-preload-677886": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 12:25:16.188031 580663 machine.go:96] duration metric: took 652.54508ms to provisionDockerMachine
I0120 12:25:16.188043 580663 start.go:293] postStartSetup for "no-preload-677886" (driver="kvm2")
I0120 12:25:16.188057 580663 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0120 12:25:16.188094 580663 main.go:141] libmachine: (no-preload-677886) Calling .DriverName
I0120 12:25:16.188456 580663 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0120 12:25:16.188498 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHHostname
I0120 12:25:16.191394 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:25:16.191712 580663 main.go:141] libmachine: (no-preload-677886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:87:c0", ip: ""} in network mk-no-preload-677886: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:44 +0000 UTC Type:0 Mac:52:54:00:3c:87:c0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-677886 Clientid:01:52:54:00:3c:87:c0}
I0120 12:25:16.191751 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined IP address 192.168.72.157 and MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:25:16.191878 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHPort
I0120 12:25:16.192087 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHKeyPath
I0120 12:25:16.192265 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHUsername
I0120 12:25:16.192419 580663 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/no-preload-677886/id_rsa Username:docker}
I0120 12:25:16.277146 580663 ssh_runner.go:195] Run: cat /etc/os-release
I0120 12:25:16.282163 580663 info.go:137] Remote host: Buildroot 2023.02.9
I0120 12:25:16.282202 580663 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-530330/.minikube/addons for local assets ...
I0120 12:25:16.282264 580663 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-530330/.minikube/files for local assets ...
I0120 12:25:16.282348 580663 filesync.go:149] local asset: /home/jenkins/minikube-integration/20151-530330/.minikube/files/etc/ssl/certs/5375812.pem -> 5375812.pem in /etc/ssl/certs
I0120 12:25:16.282491 580663 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0120 12:25:16.292957 580663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/files/etc/ssl/certs/5375812.pem --> /etc/ssl/certs/5375812.pem (1708 bytes)
I0120 12:25:16.323353 580663 start.go:296] duration metric: took 135.288428ms for postStartSetup
I0120 12:25:16.323414 580663 fix.go:56] duration metric: took 19.732994766s for fixHost
I0120 12:25:16.323444 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHHostname
I0120 12:25:16.326291 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:25:16.326728 580663 main.go:141] libmachine: (no-preload-677886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:87:c0", ip: ""} in network mk-no-preload-677886: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:44 +0000 UTC Type:0 Mac:52:54:00:3c:87:c0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-677886 Clientid:01:52:54:00:3c:87:c0}
I0120 12:25:16.326762 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined IP address 192.168.72.157 and MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:25:16.326921 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHPort
I0120 12:25:16.327120 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHKeyPath
I0120 12:25:16.327275 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHKeyPath
I0120 12:25:16.327441 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHUsername
I0120 12:25:16.327645 580663 main.go:141] libmachine: Using SSH client type: native
I0120 12:25:16.327894 580663 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.72.157 22 <nil> <nil>}
I0120 12:25:16.327909 580663 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0120 12:25:16.435263 580663 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737375916.389485996
I0120 12:25:16.435316 580663 fix.go:216] guest clock: 1737375916.389485996
I0120 12:25:16.435327 580663 fix.go:229] Guest: 2025-01-20 12:25:16.389485996 +0000 UTC Remote: 2025-01-20 12:25:16.323419583 +0000 UTC m=+19.915192404 (delta=66.066413ms)
I0120 12:25:16.435358 580663 fix.go:200] guest clock delta is within tolerance: 66.066413ms
I0120 12:25:16.435365 580663 start.go:83] releasing machines lock for "no-preload-677886", held for 19.844964569s
I0120 12:25:16.435397 580663 main.go:141] libmachine: (no-preload-677886) Calling .DriverName
I0120 12:25:16.435687 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetIP
I0120 12:25:16.438862 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:25:16.439261 580663 main.go:141] libmachine: (no-preload-677886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:87:c0", ip: ""} in network mk-no-preload-677886: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:44 +0000 UTC Type:0 Mac:52:54:00:3c:87:c0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-677886 Clientid:01:52:54:00:3c:87:c0}
I0120 12:25:16.439292 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined IP address 192.168.72.157 and MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:25:16.439707 580663 main.go:141] libmachine: (no-preload-677886) Calling .DriverName
I0120 12:25:16.440382 580663 main.go:141] libmachine: (no-preload-677886) Calling .DriverName
I0120 12:25:16.440600 580663 main.go:141] libmachine: (no-preload-677886) Calling .DriverName
I0120 12:25:16.440714 580663 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0120 12:25:16.440777 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHHostname
I0120 12:25:16.440934 580663 ssh_runner.go:195] Run: cat /version.json
I0120 12:25:16.440970 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHHostname
I0120 12:25:16.444124 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:25:16.444356 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:25:16.444539 580663 main.go:141] libmachine: (no-preload-677886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:87:c0", ip: ""} in network mk-no-preload-677886: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:44 +0000 UTC Type:0 Mac:52:54:00:3c:87:c0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-677886 Clientid:01:52:54:00:3c:87:c0}
I0120 12:25:16.444579 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined IP address 192.168.72.157 and MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:25:16.444741 580663 main.go:141] libmachine: (no-preload-677886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:87:c0", ip: ""} in network mk-no-preload-677886: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:44 +0000 UTC Type:0 Mac:52:54:00:3c:87:c0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-677886 Clientid:01:52:54:00:3c:87:c0}
I0120 12:25:16.444760 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHPort
I0120 12:25:16.444767 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined IP address 192.168.72.157 and MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:25:16.444974 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHPort
I0120 12:25:16.445026 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHKeyPath
I0120 12:25:16.445153 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHKeyPath
I0120 12:25:16.445206 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHUsername
I0120 12:25:16.445412 580663 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/no-preload-677886/id_rsa Username:docker}
I0120 12:25:16.445429 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHUsername
I0120 12:25:16.445622 580663 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/no-preload-677886/id_rsa Username:docker}
I0120 12:25:16.523376 580663 ssh_runner.go:195] Run: systemctl --version
I0120 12:25:16.551805 580663 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0120 12:25:16.560103 580663 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0120 12:25:16.560184 580663 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0120 12:25:16.585768 580663 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0120 12:25:16.585821 580663 start.go:495] detecting cgroup driver to use...
I0120 12:25:16.585918 580663 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0120 12:25:16.619412 580663 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0120 12:25:16.634018 580663 docker.go:217] disabling cri-docker service (if available) ...
I0120 12:25:16.634091 580663 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0120 12:25:16.650862 580663 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0120 12:25:16.667222 580663 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0120 12:25:16.827621 580663 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0120 12:25:16.997836 580663 docker.go:233] disabling docker service ...
I0120 12:25:16.997920 580663 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0120 12:25:17.012952 580663 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0120 12:25:17.033066 580663 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0120 12:25:17.184785 580663 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0120 12:25:17.308240 580663 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0120 12:25:17.323018 580663 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0120 12:25:17.346117 580663 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0120 12:25:17.362604 580663 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0120 12:25:17.377268 580663 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0120 12:25:17.377358 580663 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0120 12:25:17.389938 580663 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0120 12:25:17.401504 580663 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0120 12:25:17.412628 580663 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0120 12:25:17.423600 580663 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0120 12:25:17.434784 580663 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0120 12:25:17.446433 580663 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0120 12:25:17.457770 580663 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0120 12:25:17.470005 580663 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0120 12:25:17.480134 580663 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0120 12:25:17.480204 580663 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0120 12:25:17.495835 580663 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0120 12:25:17.506603 580663 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0120 12:25:17.647336 580663 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0120 12:25:17.679291 580663 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0120 12:25:17.679405 580663 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0120 12:25:17.684614 580663 retry.go:31] will retry after 596.77903ms: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
I0120 12:25:18.282567 580663 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0120 12:25:18.288477 580663 start.go:563] Will wait 60s for crictl version
I0120 12:25:18.288558 580663 ssh_runner.go:195] Run: which crictl
I0120 12:25:18.293095 580663 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0120 12:25:18.339384 580663 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.7.23
RuntimeApiVersion: v1
I0120 12:25:18.339513 580663 ssh_runner.go:195] Run: containerd --version
I0120 12:25:18.371062 580663 ssh_runner.go:195] Run: containerd --version
I0120 12:25:18.401306 580663 out.go:177] * Preparing Kubernetes v1.32.0 on containerd 1.7.23 ...
I0120 12:25:18.402946 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetIP
I0120 12:25:18.406062 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:25:18.406509 580663 main.go:141] libmachine: (no-preload-677886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:87:c0", ip: ""} in network mk-no-preload-677886: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:44 +0000 UTC Type:0 Mac:52:54:00:3c:87:c0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-677886 Clientid:01:52:54:00:3c:87:c0}
I0120 12:25:18.406529 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined IP address 192.168.72.157 and MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:25:18.406815 580663 ssh_runner.go:195] Run: grep 192.168.72.1 host.minikube.internal$ /etc/hosts
I0120 12:25:18.411947 580663 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0120 12:25:18.425347 580663 kubeadm.go:883] updating cluster {Name:no-preload-677886 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:no-preload-677886 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.157 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0120 12:25:18.425473 580663 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
I0120 12:25:18.425516 580663 ssh_runner.go:195] Run: sudo crictl images --output json
I0120 12:25:18.462916 580663 containerd.go:627] all images are preloaded for containerd runtime.
I0120 12:25:18.462946 580663 cache_images.go:84] Images are preloaded, skipping loading
I0120 12:25:18.462957 580663 kubeadm.go:934] updating node { 192.168.72.157 8443 v1.32.0 containerd true true} ...
I0120 12:25:18.463086 580663 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-677886 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.157
[Install]
config:
{KubernetesVersion:v1.32.0 ClusterName:no-preload-677886 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0120 12:25:18.463159 580663 ssh_runner.go:195] Run: sudo crictl info
I0120 12:25:18.499236 580663 cni.go:84] Creating CNI manager for ""
I0120 12:25:18.499264 580663 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0120 12:25:18.499280 580663 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0120 12:25:18.499310 580663 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.157 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-677886 NodeName:no-preload-677886 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.157"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.157 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0120 12:25:18.499474 580663 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.72.157
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "no-preload-677886"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.72.157"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.72.157"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      - name: "proxy-refresh-interval"
        value: "70000"
kubernetesVersion: v1.32.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0120 12:25:18.499563 580663 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
I0120 12:25:18.510525 580663 binaries.go:44] Found k8s binaries, skipping transfer
I0120 12:25:18.510642 580663 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0120 12:25:18.524295 580663 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
I0120 12:25:18.543425 580663 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0120 12:25:18.561360 580663 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2313 bytes)
I0120 12:25:18.581273 580663 ssh_runner.go:195] Run: grep 192.168.72.157 control-plane.minikube.internal$ /etc/hosts
I0120 12:25:18.593128 580663 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.157 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0120 12:25:18.606167 580663 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0120 12:25:18.729737 580663 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0120 12:25:18.753136 580663 certs.go:68] Setting up /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/no-preload-677886 for IP: 192.168.72.157
I0120 12:25:18.753159 580663 certs.go:194] generating shared ca certs ...
I0120 12:25:18.753178 580663 certs.go:226] acquiring lock for ca certs: {Name:mk52c62007c989bdf47cf8ee68bb49e4d4d8996b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 12:25:18.753337 580663 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20151-530330/.minikube/ca.key
I0120 12:25:18.753395 580663 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20151-530330/.minikube/proxy-client-ca.key
I0120 12:25:18.753409 580663 certs.go:256] generating profile certs ...
I0120 12:25:18.753519 580663 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/no-preload-677886/client.key
I0120 12:25:18.753605 580663 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/no-preload-677886/apiserver.key.8959decb
I0120 12:25:18.753660 580663 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/no-preload-677886/proxy-client.key
I0120 12:25:18.753790 580663 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/537581.pem (1338 bytes)
W0120 12:25:18.753853 580663 certs.go:480] ignoring /home/jenkins/minikube-integration/20151-530330/.minikube/certs/537581_empty.pem, impossibly tiny 0 bytes
I0120 12:25:18.753869 580663 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca-key.pem (1679 bytes)
I0120 12:25:18.753902 580663 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca.pem (1078 bytes)
I0120 12:25:18.753934 580663 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/cert.pem (1123 bytes)
I0120 12:25:18.753966 580663 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/key.pem (1675 bytes)
I0120 12:25:18.754031 580663 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-530330/.minikube/files/etc/ssl/certs/5375812.pem (1708 bytes)
I0120 12:25:18.755002 580663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0120 12:25:18.810090 580663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0120 12:25:18.853328 580663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0120 12:25:18.889238 580663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0120 12:25:18.925460 580663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/no-preload-677886/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I0120 12:25:18.962448 580663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/no-preload-677886/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0120 12:25:18.999369 580663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/no-preload-677886/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0120 12:25:19.032057 580663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/no-preload-677886/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0120 12:25:19.061632 580663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0120 12:25:19.091446 580663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/certs/537581.pem --> /usr/share/ca-certificates/537581.pem (1338 bytes)
I0120 12:25:19.118422 580663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/files/etc/ssl/certs/5375812.pem --> /usr/share/ca-certificates/5375812.pem (1708 bytes)
I0120 12:25:19.143431 580663 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0120 12:25:19.162253 580663 ssh_runner.go:195] Run: openssl version
I0120 12:25:19.168374 580663 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537581.pem && ln -fs /usr/share/ca-certificates/537581.pem /etc/ssl/certs/537581.pem"
I0120 12:25:19.180856 580663 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537581.pem
I0120 12:25:19.185868 580663 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 11:24 /usr/share/ca-certificates/537581.pem
I0120 12:25:19.185929 580663 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537581.pem
I0120 12:25:19.192441 580663 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537581.pem /etc/ssl/certs/51391683.0"
I0120 12:25:19.205064 580663 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5375812.pem && ln -fs /usr/share/ca-certificates/5375812.pem /etc/ssl/certs/5375812.pem"
I0120 12:25:19.221620 580663 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5375812.pem
I0120 12:25:19.227409 580663 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 11:24 /usr/share/ca-certificates/5375812.pem
I0120 12:25:19.227483 580663 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5375812.pem
I0120 12:25:19.235639 580663 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5375812.pem /etc/ssl/certs/3ec20f2e.0"
I0120 12:25:19.247521 580663 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0120 12:25:19.259669 580663 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0120 12:25:19.265367 580663 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 11:16 /usr/share/ca-certificates/minikubeCA.pem
I0120 12:25:19.265458 580663 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0120 12:25:19.272666 580663 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0120 12:25:19.286126 580663 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0120 12:25:19.291058 580663 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0120 12:25:19.297354 580663 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0120 12:25:19.303419 580663 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0120 12:25:19.310027 580663 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0120 12:25:19.317795 580663 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0120 12:25:19.325533 580663 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
I0120 12:25:19.331891 580663 kubeadm.go:392] StartCluster: {Name:no-preload-677886 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:no-preload-677886 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.157 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0120 12:25:19.332000 580663 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0120 12:25:19.332050 580663 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0120 12:25:19.374646 580663 cri.go:89] found id: "bfa57f7d3617765871ed0a201c2d868cd3eb881ec9ee84e3798741d3978fa323"
I0120 12:25:19.374676 580663 cri.go:89] found id: "8889bf48964a56ae72872b426a20a0932902b06c15b99a3de1e3c8ba1b04bfc7"
I0120 12:25:19.374679 580663 cri.go:89] found id: "eea6f34f583b30cc484060412a97a0bc3c95afa3df578ef1e07bd6b681bc54a9"
I0120 12:25:19.374682 580663 cri.go:89] found id: "222c27052e1031adbafb6744b25584f19a0a8dc63a205d42dc5f229d1ff60e47"
I0120 12:25:19.374685 580663 cri.go:89] found id: "b4c070e430ab49b25459e2a02d6216019116c10d64063980782cb619692fd16b"
I0120 12:25:19.374688 580663 cri.go:89] found id: "7509889b1127538ae07b5ab56638d11a21a970fc8972875f72e92215b2ace3c1"
I0120 12:25:19.374691 580663 cri.go:89] found id: "6c9ede0755ce5eb4cd9c9c3289194d11bbfeb9a9706f0fb91fd6c48f8b86d94a"
I0120 12:25:19.374694 580663 cri.go:89] found id: "5db0fac76c1d13f3d7b5654b5e72844e7f50678029ad9fd190b65870619d03fd"
I0120 12:25:19.374696 580663 cri.go:89] found id: ""
I0120 12:25:19.374743 580663 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
W0120 12:25:19.391060 580663 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-01-20T12:25:19Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I0120 12:25:19.391199 580663 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0120 12:25:19.402737 580663 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0120 12:25:19.402763 580663 kubeadm.go:593] restartPrimaryControlPlane start ...
I0120 12:25:19.402827 580663 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0120 12:25:19.414851 580663 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0120 12:25:19.416024 580663 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-677886" does not appear in /home/jenkins/minikube-integration/20151-530330/kubeconfig
I0120 12:25:19.416621 580663 kubeconfig.go:62] /home/jenkins/minikube-integration/20151-530330/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-677886" cluster setting kubeconfig missing "no-preload-677886" context setting]
I0120 12:25:19.417328 580663 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-530330/kubeconfig: {Name:mk113e13541afa8413ea8a359169b0824f5f9ef4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 12:25:19.419325 580663 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0120 12:25:19.430541 580663 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.157
I0120 12:25:19.430579 580663 kubeadm.go:1160] stopping kube-system containers ...
I0120 12:25:19.430599 580663 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
I0120 12:25:19.430659 580663 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0120 12:25:19.487663 580663 cri.go:89] found id: "bfa57f7d3617765871ed0a201c2d868cd3eb881ec9ee84e3798741d3978fa323"
I0120 12:25:19.487695 580663 cri.go:89] found id: "8889bf48964a56ae72872b426a20a0932902b06c15b99a3de1e3c8ba1b04bfc7"
I0120 12:25:19.487702 580663 cri.go:89] found id: "eea6f34f583b30cc484060412a97a0bc3c95afa3df578ef1e07bd6b681bc54a9"
I0120 12:25:19.487707 580663 cri.go:89] found id: "222c27052e1031adbafb6744b25584f19a0a8dc63a205d42dc5f229d1ff60e47"
I0120 12:25:19.487712 580663 cri.go:89] found id: "b4c070e430ab49b25459e2a02d6216019116c10d64063980782cb619692fd16b"
I0120 12:25:19.487717 580663 cri.go:89] found id: "7509889b1127538ae07b5ab56638d11a21a970fc8972875f72e92215b2ace3c1"
I0120 12:25:19.487721 580663 cri.go:89] found id: "6c9ede0755ce5eb4cd9c9c3289194d11bbfeb9a9706f0fb91fd6c48f8b86d94a"
I0120 12:25:19.487725 580663 cri.go:89] found id: "5db0fac76c1d13f3d7b5654b5e72844e7f50678029ad9fd190b65870619d03fd"
I0120 12:25:19.487729 580663 cri.go:89] found id: ""
I0120 12:25:19.487736 580663 cri.go:252] Stopping containers: [bfa57f7d3617765871ed0a201c2d868cd3eb881ec9ee84e3798741d3978fa323 8889bf48964a56ae72872b426a20a0932902b06c15b99a3de1e3c8ba1b04bfc7 eea6f34f583b30cc484060412a97a0bc3c95afa3df578ef1e07bd6b681bc54a9 222c27052e1031adbafb6744b25584f19a0a8dc63a205d42dc5f229d1ff60e47 b4c070e430ab49b25459e2a02d6216019116c10d64063980782cb619692fd16b 7509889b1127538ae07b5ab56638d11a21a970fc8972875f72e92215b2ace3c1 6c9ede0755ce5eb4cd9c9c3289194d11bbfeb9a9706f0fb91fd6c48f8b86d94a 5db0fac76c1d13f3d7b5654b5e72844e7f50678029ad9fd190b65870619d03fd]
I0120 12:25:19.487797 580663 ssh_runner.go:195] Run: which crictl
I0120 12:25:19.492093 580663 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 bfa57f7d3617765871ed0a201c2d868cd3eb881ec9ee84e3798741d3978fa323 8889bf48964a56ae72872b426a20a0932902b06c15b99a3de1e3c8ba1b04bfc7 eea6f34f583b30cc484060412a97a0bc3c95afa3df578ef1e07bd6b681bc54a9 222c27052e1031adbafb6744b25584f19a0a8dc63a205d42dc5f229d1ff60e47 b4c070e430ab49b25459e2a02d6216019116c10d64063980782cb619692fd16b 7509889b1127538ae07b5ab56638d11a21a970fc8972875f72e92215b2ace3c1 6c9ede0755ce5eb4cd9c9c3289194d11bbfeb9a9706f0fb91fd6c48f8b86d94a 5db0fac76c1d13f3d7b5654b5e72844e7f50678029ad9fd190b65870619d03fd
I0120 12:25:19.531809 580663 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0120 12:25:19.549013 580663 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0120 12:25:19.563634 580663 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0120 12:25:19.563664 580663 kubeadm.go:157] found existing configuration files:
I0120 12:25:19.563724 580663 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0120 12:25:19.576840 580663 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0120 12:25:19.576904 580663 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0120 12:25:19.591965 580663 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0120 12:25:19.602797 580663 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0120 12:25:19.602868 580663 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0120 12:25:19.616597 580663 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0120 12:25:19.629930 580663 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0120 12:25:19.630018 580663 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0120 12:25:19.643805 580663 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0120 12:25:19.656962 580663 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0120 12:25:19.657040 580663 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0120 12:25:19.671375 580663 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0120 12:25:19.685780 580663 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0120 12:25:19.836161 580663 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0120 12:25:20.692199 580663 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0120 12:25:20.897505 580663 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0120 12:25:20.970999 580663 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0120 12:25:21.088635 580663 api_server.go:52] waiting for apiserver process to appear ...
I0120 12:25:21.088732 580663 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0120 12:25:21.589031 580663 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0120 12:25:22.088913 580663 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0120 12:25:22.121572 580663 api_server.go:72] duration metric: took 1.032934898s to wait for apiserver process to appear ...
I0120 12:25:22.121609 580663 api_server.go:88] waiting for apiserver healthz status ...
I0120 12:25:22.121635 580663 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
I0120 12:25:22.122270 580663 api_server.go:269] stopped: https://192.168.72.157:8443/healthz: Get "https://192.168.72.157:8443/healthz": dial tcp 192.168.72.157:8443: connect: connection refused
I0120 12:25:22.621890 580663 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
I0120 12:25:25.087924 580663 api_server.go:279] https://192.168.72.157:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0120 12:25:25.087959 580663 api_server.go:103] status: https://192.168.72.157:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0120 12:25:25.087981 580663 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
I0120 12:25:25.116120 580663 api_server.go:279] https://192.168.72.157:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0120 12:25:25.116148 580663 api_server.go:103] status: https://192.168.72.157:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0120 12:25:25.122385 580663 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
I0120 12:25:25.193884 580663 api_server.go:279] https://192.168.72.157:8443/healthz returned 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[-]poststarthook/start-apiextensions-controllers failed: reason withheld
[-]poststarthook/crd-informer-synced failed: reason withheld
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[-]poststarthook/apiservice-registration-controller failed: reason withheld
[-]poststarthook/apiservice-discovery-controller failed: reason withheld
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0120 12:25:25.193938 580663 api_server.go:103] status: https://192.168.72.157:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[-]poststarthook/start-apiextensions-controllers failed: reason withheld
[-]poststarthook/crd-informer-synced failed: reason withheld
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[-]poststarthook/apiservice-registration-controller failed: reason withheld
[-]poststarthook/apiservice-discovery-controller failed: reason withheld
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0120 12:25:25.622588 580663 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
I0120 12:25:25.627048 580663 api_server.go:279] https://192.168.72.157:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0120 12:25:25.627072 580663 api_server.go:103] status: https://192.168.72.157:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0120 12:25:26.121711 580663 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
I0120 12:25:26.131103 580663 api_server.go:279] https://192.168.72.157:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0120 12:25:26.131131 580663 api_server.go:103] status: https://192.168.72.157:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0120 12:25:26.621857 580663 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
I0120 12:25:26.630738 580663 api_server.go:279] https://192.168.72.157:8443/healthz returned 200:
ok
I0120 12:25:26.641660 580663 api_server.go:141] control plane version: v1.32.0
I0120 12:25:26.641688 580663 api_server.go:131] duration metric: took 4.520071397s to wait for apiserver health ...
I0120 12:25:26.641697 580663 cni.go:84] Creating CNI manager for ""
I0120 12:25:26.641703 580663 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0120 12:25:26.643494 580663 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0120 12:25:26.645193 580663 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0120 12:25:26.665039 580663 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0120 12:25:26.693649 580663 system_pods.go:43] waiting for kube-system pods to appear ...
I0120 12:25:26.703765 580663 system_pods.go:59] 8 kube-system pods found
I0120 12:25:26.703803 580663 system_pods.go:61] "coredns-668d6bf9bc-zb8zw" [76792e2d-784e-40bd-8f41-dff4f5d2a000] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0120 12:25:26.703815 580663 system_pods.go:61] "etcd-no-preload-677886" [19c08e3a-a730-4dc7-a415-241f04c62e96] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0120 12:25:26.703833 580663 system_pods.go:61] "kube-apiserver-no-preload-677886" [dba4da15-817f-4cd9-9cf6-3b86c494c7d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0120 12:25:26.703851 580663 system_pods.go:61] "kube-controller-manager-no-preload-677886" [3010b348-847c-4c27-b60d-d69f8a145886] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0120 12:25:26.703860 580663 system_pods.go:61] "kube-proxy-9xrpd" [70e7b10c-60c6-4667-8ba2-76f7cd4857ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0120 12:25:26.703873 580663 system_pods.go:61] "kube-scheduler-no-preload-677886" [3788a16c-16fb-413a-a6e2-2e9a4e4d86ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0120 12:25:26.703884 580663 system_pods.go:61] "metrics-server-f79f97bbb-6hgwn" [96b61173-8260-4d4c-b87a-1fbeacc5e0e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0120 12:25:26.703894 580663 system_pods.go:61] "storage-provisioner" [f9580e57-1600-4be5-a8a6-c56d510ced4c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0120 12:25:26.703903 580663 system_pods.go:74] duration metric: took 10.231015ms to wait for pod list to return data ...
I0120 12:25:26.703913 580663 node_conditions.go:102] verifying NodePressure condition ...
I0120 12:25:26.709233 580663 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0120 12:25:26.709263 580663 node_conditions.go:123] node cpu capacity is 2
I0120 12:25:26.709276 580663 node_conditions.go:105] duration metric: took 5.355597ms to run NodePressure ...
I0120 12:25:26.709295 580663 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0120 12:25:27.005262 580663 kubeadm.go:724] waiting for restarted kubelet to initialise ...
I0120 12:25:27.009537 580663 kubeadm.go:739] kubelet initialised
I0120 12:25:27.009557 580663 kubeadm.go:740] duration metric: took 4.264831ms waiting for restarted kubelet to initialise ...
I0120 12:25:27.009565 580663 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0120 12:25:27.013597 580663 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-zb8zw" in "kube-system" namespace to be "Ready" ...
I0120 12:25:29.020427 580663 pod_ready.go:103] pod "coredns-668d6bf9bc-zb8zw" in "kube-system" namespace has status "Ready":"False"
I0120 12:25:31.021432 580663 pod_ready.go:103] pod "coredns-668d6bf9bc-zb8zw" in "kube-system" namespace has status "Ready":"False"
I0120 12:25:32.020160 580663 pod_ready.go:93] pod "coredns-668d6bf9bc-zb8zw" in "kube-system" namespace has status "Ready":"True"
I0120 12:25:32.020186 580663 pod_ready.go:82] duration metric: took 5.00656531s for pod "coredns-668d6bf9bc-zb8zw" in "kube-system" namespace to be "Ready" ...
I0120 12:25:32.020197 580663 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-677886" in "kube-system" namespace to be "Ready" ...
I0120 12:25:34.026830 580663 pod_ready.go:103] pod "etcd-no-preload-677886" in "kube-system" namespace has status "Ready":"False"
I0120 12:25:36.027008 580663 pod_ready.go:103] pod "etcd-no-preload-677886" in "kube-system" namespace has status "Ready":"False"
I0120 12:25:38.027568 580663 pod_ready.go:103] pod "etcd-no-preload-677886" in "kube-system" namespace has status "Ready":"False"
I0120 12:25:40.529616 580663 pod_ready.go:103] pod "etcd-no-preload-677886" in "kube-system" namespace has status "Ready":"False"
I0120 12:25:41.527275 580663 pod_ready.go:93] pod "etcd-no-preload-677886" in "kube-system" namespace has status "Ready":"True"
I0120 12:25:41.527298 580663 pod_ready.go:82] duration metric: took 9.507094464s for pod "etcd-no-preload-677886" in "kube-system" namespace to be "Ready" ...
I0120 12:25:41.527308 580663 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-677886" in "kube-system" namespace to be "Ready" ...
I0120 12:25:41.532202 580663 pod_ready.go:93] pod "kube-apiserver-no-preload-677886" in "kube-system" namespace has status "Ready":"True"
I0120 12:25:41.532228 580663 pod_ready.go:82] duration metric: took 4.913239ms for pod "kube-apiserver-no-preload-677886" in "kube-system" namespace to be "Ready" ...
I0120 12:25:41.532238 580663 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-677886" in "kube-system" namespace to be "Ready" ...
I0120 12:25:41.536384 580663 pod_ready.go:93] pod "kube-controller-manager-no-preload-677886" in "kube-system" namespace has status "Ready":"True"
I0120 12:25:41.536403 580663 pod_ready.go:82] duration metric: took 4.158471ms for pod "kube-controller-manager-no-preload-677886" in "kube-system" namespace to be "Ready" ...
I0120 12:25:41.536411 580663 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-9xrpd" in "kube-system" namespace to be "Ready" ...
I0120 12:25:41.540413 580663 pod_ready.go:93] pod "kube-proxy-9xrpd" in "kube-system" namespace has status "Ready":"True"
I0120 12:25:41.540430 580663 pod_ready.go:82] duration metric: took 4.014364ms for pod "kube-proxy-9xrpd" in "kube-system" namespace to be "Ready" ...
I0120 12:25:41.540438 580663 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-677886" in "kube-system" namespace to be "Ready" ...
I0120 12:25:41.544348 580663 pod_ready.go:93] pod "kube-scheduler-no-preload-677886" in "kube-system" namespace has status "Ready":"True"
I0120 12:25:41.544368 580663 pod_ready.go:82] duration metric: took 3.923918ms for pod "kube-scheduler-no-preload-677886" in "kube-system" namespace to be "Ready" ...
I0120 12:25:41.544377 580663 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace to be "Ready" ...
I0120 12:25:43.551462 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:25:46.052740 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:25:48.053396 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:25:50.551084 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:25:52.553112 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:25:55.051232 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:25:57.051510 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:25:59.055844 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:26:01.553091 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:26:04.051451 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:26:06.051745 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:26:08.051926 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:26:10.058147 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:26:12.552173 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:26:14.553469 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:26:17.051972 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:26:19.052257 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:26:21.551553 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:26:23.552130 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:26:26.051383 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:26:28.549742 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:26:30.551885 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:26:32.556125 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:26:35.054623 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:26:37.551532 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:26:39.551592 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:26:41.553899 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:26:44.050895 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:26:46.552836 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:26:48.553470 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:26:50.554840 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:26:53.053470 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:26:55.552983 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:26:58.054576 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:27:00.552438 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:27:02.554035 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:27:05.051995 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:27:07.053250 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:27:09.551608 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:27:12.052171 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:27:14.551916 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:27:16.553013 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:27:19.052605 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:27:21.553751 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:27:24.054433 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:27:26.551663 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:27:29.052843 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:27:31.053282 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:27:33.550594 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:27:35.551150 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:27:37.551800 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:27:40.050932 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:27:42.550828 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:27:44.551516 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:27:46.552551 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:27:49.051597 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:27:51.550614 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:27:53.550923 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:27:56.050037 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:27:58.051436 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:00.051609 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:02.551345 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:04.551710 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:07.051565 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:09.551406 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:12.051287 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:14.051571 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:16.550571 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:18.551384 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:21.052345 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:23.052988 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:25.552160 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:27.553119 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:30.052514 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:32.052597 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:34.550382 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:36.554593 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:39.052292 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:41.551156 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:43.552839 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:46.051011 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:48.051793 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:50.051883 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:52.052625 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:54.552862 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:56.596014 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:28:59.052473 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:29:01.053068 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:29:03.053535 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:29:05.551774 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:29:08.051998 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:29:10.052549 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:29:12.551545 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:29:15.052148 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:29:17.551185 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:29:19.552734 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:29:22.051159 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:29:24.053498 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:29:26.552235 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:29:29.051004 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:29:31.051485 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:29:33.551037 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:29:35.551680 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:29:38.051626 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:29:40.051943 580663 pod_ready.go:103] pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace has status "Ready":"False"
I0120 12:29:41.544634 580663 pod_ready.go:82] duration metric: took 4m0.00023314s for pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace to be "Ready" ...
E0120 12:29:41.544663 580663 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-6hgwn" in "kube-system" namespace to be "Ready" (will not retry!)
I0120 12:29:41.544691 580663 pod_ready.go:39] duration metric: took 4m14.535115442s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0120 12:29:41.544734 580663 kubeadm.go:597] duration metric: took 4m22.141964379s to restartPrimaryControlPlane
W0120 12:29:41.544823 580663 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
! Unable to restart control-plane node(s), will reset cluster: <no value>
I0120 12:29:41.544859 580663 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I0120 12:29:43.325105 580663 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.780216325s)
I0120 12:29:43.325179 580663 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0120 12:29:43.340601 580663 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0120 12:29:43.352006 580663 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0120 12:29:43.363189 580663 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0120 12:29:43.363210 580663 kubeadm.go:157] found existing configuration files:
I0120 12:29:43.363265 580663 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0120 12:29:43.375237 580663 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0120 12:29:43.375301 580663 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0120 12:29:43.391031 580663 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0120 12:29:43.401786 580663 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0120 12:29:43.401871 580663 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0120 12:29:43.413048 580663 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0120 12:29:43.423854 580663 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0120 12:29:43.423932 580663 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0120 12:29:43.434619 580663 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0120 12:29:43.444908 580663 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0120 12:29:43.444978 580663 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0120 12:29:43.455919 580663 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0120 12:29:43.503019 580663 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
I0120 12:29:43.503090 580663 kubeadm.go:310] [preflight] Running pre-flight checks
I0120 12:29:43.620840 580663 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0120 12:29:43.621013 580663 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0120 12:29:43.621138 580663 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0120 12:29:43.628035 580663 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0120 12:29:43.630145 580663 out.go:235] - Generating certificates and keys ...
I0120 12:29:43.630283 580663 kubeadm.go:310] [certs] Using existing ca certificate authority
I0120 12:29:43.630755 580663 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0120 12:29:43.630887 580663 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0120 12:29:43.631240 580663 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
I0120 12:29:43.631487 580663 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
I0120 12:29:43.631849 580663 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
I0120 12:29:43.632017 580663 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
I0120 12:29:43.632153 580663 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
I0120 12:29:43.632634 580663 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0120 12:29:43.632734 580663 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0120 12:29:43.632900 580663 kubeadm.go:310] [certs] Using the existing "sa" key
I0120 12:29:43.632993 580663 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0120 12:29:43.958312 580663 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0120 12:29:44.044087 580663 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0120 12:29:44.320019 580663 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0120 12:29:44.451393 580663 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0120 12:29:44.716527 580663 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0120 12:29:44.717392 580663 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0120 12:29:44.721542 580663 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0120 12:29:44.723747 580663 out.go:235] - Booting up control plane ...
I0120 12:29:44.723867 580663 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0120 12:29:44.724452 580663 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0120 12:29:44.727368 580663 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0120 12:29:44.749031 580663 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0120 12:29:44.757092 580663 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0120 12:29:44.757174 580663 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0120 12:29:44.921783 580663 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0120 12:29:44.921993 580663 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0120 12:29:45.922247 580663 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001241539s
I0120 12:29:45.922381 580663 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0120 12:29:50.926135 580663 kubeadm.go:310] [api-check] The API server is healthy after 5.002210497s
I0120 12:29:50.937294 580663 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0120 12:29:50.956725 580663 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0120 12:29:51.003153 580663 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0120 12:29:51.003451 580663 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-677886 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0120 12:29:51.022662 580663 kubeadm.go:310] [bootstrap-token] Using token: yujpfs.k6ck90dtmo1yxa66
I0120 12:29:51.024781 580663 out.go:235] - Configuring RBAC rules ...
I0120 12:29:51.024951 580663 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0120 12:29:51.037177 580663 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0120 12:29:51.051029 580663 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0120 12:29:51.060737 580663 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0120 12:29:51.066857 580663 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0120 12:29:51.073422 580663 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0120 12:29:51.331992 580663 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0120 12:29:51.780375 580663 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0120 12:29:52.331230 580663 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0120 12:29:52.333488 580663 kubeadm.go:310]
I0120 12:29:52.333590 580663 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0120 12:29:52.333620 580663 kubeadm.go:310]
I0120 12:29:52.333712 580663 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0120 12:29:52.333718 580663 kubeadm.go:310]
I0120 12:29:52.333740 580663 kubeadm.go:310] mkdir -p $HOME/.kube
I0120 12:29:52.333797 580663 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0120 12:29:52.333881 580663 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0120 12:29:52.333892 580663 kubeadm.go:310]
I0120 12:29:52.333985 580663 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0120 12:29:52.334006 580663 kubeadm.go:310]
I0120 12:29:52.334077 580663 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0120 12:29:52.334089 580663 kubeadm.go:310]
I0120 12:29:52.334158 580663 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0120 12:29:52.334276 580663 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0120 12:29:52.334381 580663 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0120 12:29:52.334403 580663 kubeadm.go:310]
I0120 12:29:52.334505 580663 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0120 12:29:52.334611 580663 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0120 12:29:52.334628 580663 kubeadm.go:310]
I0120 12:29:52.334741 580663 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token yujpfs.k6ck90dtmo1yxa66 \
I0120 12:29:52.334875 580663 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:326640d5f51fa6eddf3fd6f2b38f5a08d4608620129e8898d45359839be856c3 \
I0120 12:29:52.334907 580663 kubeadm.go:310] --control-plane
I0120 12:29:52.334917 580663 kubeadm.go:310]
I0120 12:29:52.335036 580663 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0120 12:29:52.335047 580663 kubeadm.go:310]
I0120 12:29:52.335155 580663 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token yujpfs.k6ck90dtmo1yxa66 \
I0120 12:29:52.335306 580663 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:326640d5f51fa6eddf3fd6f2b38f5a08d4608620129e8898d45359839be856c3
I0120 12:29:52.336641 580663 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0120 12:29:52.336671 580663 cni.go:84] Creating CNI manager for ""
I0120 12:29:52.336684 580663 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0120 12:29:52.337989 580663 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0120 12:29:52.339338 580663 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0120 12:29:52.359963 580663 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0120 12:29:52.385108 580663 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0120 12:29:52.385173 580663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 12:29:52.385187 580663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-677886 minikube.k8s.io/updated_at=2025_01_20T12_29_52_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=77d80cf1517f5f1439721b28711982314b21bec9 minikube.k8s.io/name=no-preload-677886 minikube.k8s.io/primary=true
I0120 12:29:52.700612 580663 ops.go:34] apiserver oom_adj: -16
I0120 12:29:52.700716 580663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 12:29:53.201614 580663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 12:29:53.700980 580663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 12:29:54.200936 580663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 12:29:54.700963 580663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 12:29:55.200993 580663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 12:29:55.701788 580663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 12:29:55.818494 580663 kubeadm.go:1113] duration metric: took 3.433386907s to wait for elevateKubeSystemPrivileges
I0120 12:29:55.818535 580663 kubeadm.go:394] duration metric: took 4m36.486654712s to StartCluster
I0120 12:29:55.818555 580663 settings.go:142] acquiring lock: {Name:mkbafde306c71e7b8958e2377ddfa5a9e3a59113 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 12:29:55.818636 580663 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20151-530330/kubeconfig
I0120 12:29:55.820492 580663 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-530330/kubeconfig: {Name:mk113e13541afa8413ea8a359169b0824f5f9ef4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 12:29:55.827906 580663 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.157 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0120 12:29:55.828002 580663 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0120 12:29:55.828108 580663 addons.go:69] Setting storage-provisioner=true in profile "no-preload-677886"
I0120 12:29:55.828131 580663 addons.go:238] Setting addon storage-provisioner=true in "no-preload-677886"
W0120 12:29:55.828140 580663 addons.go:247] addon storage-provisioner should already be in state true
I0120 12:29:55.828129 580663 addons.go:69] Setting default-storageclass=true in profile "no-preload-677886"
I0120 12:29:55.828162 580663 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-677886"
I0120 12:29:55.828176 580663 host.go:66] Checking if "no-preload-677886" exists ...
I0120 12:29:55.828226 580663 config.go:182] Loaded profile config "no-preload-677886": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 12:29:55.828302 580663 addons.go:69] Setting dashboard=true in profile "no-preload-677886"
I0120 12:29:55.828321 580663 addons.go:238] Setting addon dashboard=true in "no-preload-677886"
W0120 12:29:55.828332 580663 addons.go:247] addon dashboard should already be in state true
I0120 12:29:55.828362 580663 host.go:66] Checking if "no-preload-677886" exists ...
I0120 12:29:55.828680 580663 addons.go:69] Setting metrics-server=true in profile "no-preload-677886"
I0120 12:29:55.828718 580663 addons.go:238] Setting addon metrics-server=true in "no-preload-677886"
W0120 12:29:55.828727 580663 addons.go:247] addon metrics-server should already be in state true
I0120 12:29:55.828729 580663 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 12:29:55.828758 580663 host.go:66] Checking if "no-preload-677886" exists ...
I0120 12:29:55.828773 580663 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 12:29:55.828790 580663 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 12:29:55.828838 580663 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 12:29:55.829142 580663 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 12:29:55.829171 580663 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 12:29:55.829387 580663 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 12:29:55.829436 580663 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 12:29:55.829964 580663 out.go:177] * Verifying Kubernetes components...
I0120 12:29:55.831634 580663 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0120 12:29:55.847394 580663 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45261
I0120 12:29:55.847867 580663 main.go:141] libmachine: () Calling .GetVersion
I0120 12:29:55.848446 580663 main.go:141] libmachine: Using API Version 1
I0120 12:29:55.848460 580663 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 12:29:55.848917 580663 main.go:141] libmachine: () Calling .GetMachineName
I0120 12:29:55.849092 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetState
I0120 12:29:55.849662 580663 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35369
I0120 12:29:55.850208 580663 main.go:141] libmachine: () Calling .GetVersion
I0120 12:29:55.850763 580663 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36387
I0120 12:29:55.850852 580663 main.go:141] libmachine: Using API Version 1
I0120 12:29:55.850870 580663 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 12:29:55.851450 580663 main.go:141] libmachine: () Calling .GetMachineName
I0120 12:29:55.851563 580663 main.go:141] libmachine: () Calling .GetVersion
I0120 12:29:55.852783 580663 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38073
I0120 12:29:55.852911 580663 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 12:29:55.852937 580663 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 12:29:55.853255 580663 main.go:141] libmachine: () Calling .GetVersion
I0120 12:29:55.853349 580663 addons.go:238] Setting addon default-storageclass=true in "no-preload-677886"
W0120 12:29:55.853358 580663 addons.go:247] addon default-storageclass should already be in state true
I0120 12:29:55.853380 580663 host.go:66] Checking if "no-preload-677886" exists ...
I0120 12:29:55.853603 580663 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 12:29:55.853624 580663 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 12:29:55.854076 580663 main.go:141] libmachine: Using API Version 1
I0120 12:29:55.854097 580663 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 12:29:55.854357 580663 main.go:141] libmachine: Using API Version 1
I0120 12:29:55.854370 580663 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 12:29:55.854666 580663 main.go:141] libmachine: () Calling .GetMachineName
I0120 12:29:55.854733 580663 main.go:141] libmachine: () Calling .GetMachineName
I0120 12:29:55.855063 580663 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 12:29:55.855086 580663 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 12:29:55.855572 580663 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 12:29:55.855613 580663 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 12:29:55.871877 580663 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46213
I0120 12:29:55.872268 580663 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42391
I0120 12:29:55.872468 580663 main.go:141] libmachine: () Calling .GetVersion
I0120 12:29:55.872568 580663 main.go:141] libmachine: () Calling .GetVersion
I0120 12:29:55.873006 580663 main.go:141] libmachine: Using API Version 1
I0120 12:29:55.873030 580663 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 12:29:55.873167 580663 main.go:141] libmachine: Using API Version 1
I0120 12:29:55.873181 580663 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 12:29:55.873318 580663 main.go:141] libmachine: () Calling .GetMachineName
I0120 12:29:55.873451 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetState
I0120 12:29:55.873499 580663 main.go:141] libmachine: () Calling .GetMachineName
I0120 12:29:55.874038 580663 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 12:29:55.874080 580663 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 12:29:55.875018 580663 main.go:141] libmachine: (no-preload-677886) Calling .DriverName
I0120 12:29:55.877132 580663 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0120 12:29:55.877504 580663 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40349
I0120 12:29:55.877895 580663 main.go:141] libmachine: () Calling .GetVersion
I0120 12:29:55.878401 580663 main.go:141] libmachine: Using API Version 1
I0120 12:29:55.878420 580663 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 12:29:55.878706 580663 main.go:141] libmachine: () Calling .GetMachineName
I0120 12:29:55.878882 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetState
I0120 12:29:55.879913 580663 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0120 12:29:55.880337 580663 main.go:141] libmachine: (no-preload-677886) Calling .DriverName
I0120 12:29:55.881391 580663 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0120 12:29:55.881407 580663 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0120 12:29:55.881438 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHHostname
I0120 12:29:55.882243 580663 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0120 12:29:55.883861 580663 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0120 12:29:55.883881 580663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0120 12:29:55.883898 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHHostname
I0120 12:29:55.885880 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:29:55.886344 580663 main.go:141] libmachine: (no-preload-677886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:87:c0", ip: ""} in network mk-no-preload-677886: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:44 +0000 UTC Type:0 Mac:52:54:00:3c:87:c0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-677886 Clientid:01:52:54:00:3c:87:c0}
I0120 12:29:55.886373 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined IP address 192.168.72.157 and MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:29:55.887207 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:29:55.887242 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHPort
I0120 12:29:55.887401 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHKeyPath
I0120 12:29:55.887748 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHUsername
I0120 12:29:55.887820 580663 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44947
I0120 12:29:55.887996 580663 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/no-preload-677886/id_rsa Username:docker}
I0120 12:29:55.888347 580663 main.go:141] libmachine: (no-preload-677886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:87:c0", ip: ""} in network mk-no-preload-677886: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:44 +0000 UTC Type:0 Mac:52:54:00:3c:87:c0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-677886 Clientid:01:52:54:00:3c:87:c0}
I0120 12:29:55.888359 580663 main.go:141] libmachine: () Calling .GetVersion
I0120 12:29:55.888385 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined IP address 192.168.72.157 and MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:29:55.888584 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHPort
I0120 12:29:55.888739 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHKeyPath
I0120 12:29:55.888859 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHUsername
I0120 12:29:55.888974 580663 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/no-preload-677886/id_rsa Username:docker}
I0120 12:29:55.889346 580663 main.go:141] libmachine: Using API Version 1
I0120 12:29:55.889369 580663 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 12:29:55.889703 580663 main.go:141] libmachine: () Calling .GetMachineName
I0120 12:29:55.890041 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetState
I0120 12:29:55.891415 580663 main.go:141] libmachine: (no-preload-677886) Calling .DriverName
I0120 12:29:55.893032 580663 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0120 12:29:55.894459 580663 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0120 12:29:55.894480 580663 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0120 12:29:55.894500 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHHostname
I0120 12:29:55.897523 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:29:55.897980 580663 main.go:141] libmachine: (no-preload-677886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:87:c0", ip: ""} in network mk-no-preload-677886: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:44 +0000 UTC Type:0 Mac:52:54:00:3c:87:c0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-677886 Clientid:01:52:54:00:3c:87:c0}
I0120 12:29:55.897996 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined IP address 192.168.72.157 and MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:29:55.898142 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHPort
I0120 12:29:55.898751 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHKeyPath
I0120 12:29:55.898981 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHUsername
I0120 12:29:55.899163 580663 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/no-preload-677886/id_rsa Username:docker}
I0120 12:29:55.906419 580663 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37287
I0120 12:29:55.906839 580663 main.go:141] libmachine: () Calling .GetVersion
I0120 12:29:55.907284 580663 main.go:141] libmachine: Using API Version 1
I0120 12:29:55.907303 580663 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 12:29:55.907783 580663 main.go:141] libmachine: () Calling .GetMachineName
I0120 12:29:55.907939 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetState
I0120 12:29:55.909544 580663 main.go:141] libmachine: (no-preload-677886) Calling .DriverName
I0120 12:29:55.909819 580663 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0120 12:29:55.909838 580663 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0120 12:29:55.909858 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHHostname
I0120 12:29:55.912395 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:29:55.912786 580663 main.go:141] libmachine: (no-preload-677886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:87:c0", ip: ""} in network mk-no-preload-677886: {Iface:virbr4 ExpiryTime:2025-01-20 13:21:44 +0000 UTC Type:0 Mac:52:54:00:3c:87:c0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-677886 Clientid:01:52:54:00:3c:87:c0}
I0120 12:29:55.912812 580663 main.go:141] libmachine: (no-preload-677886) DBG | domain no-preload-677886 has defined IP address 192.168.72.157 and MAC address 52:54:00:3c:87:c0 in network mk-no-preload-677886
I0120 12:29:55.912976 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHPort
I0120 12:29:55.913163 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHKeyPath
I0120 12:29:55.913339 580663 main.go:141] libmachine: (no-preload-677886) Calling .GetSSHUsername
I0120 12:29:55.913459 580663 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/no-preload-677886/id_rsa Username:docker}
I0120 12:29:56.070157 580663 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0120 12:29:56.091475 580663 node_ready.go:35] waiting up to 6m0s for node "no-preload-677886" to be "Ready" ...
I0120 12:29:56.116298 580663 node_ready.go:49] node "no-preload-677886" has status "Ready":"True"
I0120 12:29:56.116329 580663 node_ready.go:38] duration metric: took 24.817971ms for node "no-preload-677886" to be "Ready" ...
I0120 12:29:56.116344 580663 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0120 12:29:56.122752 580663 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-677886" in "kube-system" namespace to be "Ready" ...
I0120 12:29:56.163838 580663 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0120 12:29:56.163872 580663 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0120 12:29:56.176791 580663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0120 12:29:56.192766 580663 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0120 12:29:56.192793 580663 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0120 12:29:56.247589 580663 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0120 12:29:56.247617 580663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0120 12:29:56.259937 580663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0120 12:29:56.262988 580663 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0120 12:29:56.263013 580663 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0120 12:29:56.291947 580663 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0120 12:29:56.291975 580663 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0120 12:29:56.334662 580663 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0120 12:29:56.334684 580663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0120 12:29:56.346674 580663 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0120 12:29:56.346705 580663 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0120 12:29:56.406320 580663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0120 12:29:56.435903 580663 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0120 12:29:56.435941 580663 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0120 12:29:56.520423 580663 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0120 12:29:56.520450 580663 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0120 12:29:56.549376 580663 main.go:141] libmachine: Making call to close driver server
I0120 12:29:56.549414 580663 main.go:141] libmachine: (no-preload-677886) Calling .Close
I0120 12:29:56.549765 580663 main.go:141] libmachine: Successfully made call to close driver server
I0120 12:29:56.549785 580663 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 12:29:56.549795 580663 main.go:141] libmachine: Making call to close driver server
I0120 12:29:56.549817 580663 main.go:141] libmachine: (no-preload-677886) Calling .Close
I0120 12:29:56.549838 580663 main.go:141] libmachine: (no-preload-677886) DBG | Closing plugin on server side
I0120 12:29:56.550308 580663 main.go:141] libmachine: Successfully made call to close driver server
I0120 12:29:56.550325 580663 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 12:29:56.565213 580663 main.go:141] libmachine: Making call to close driver server
I0120 12:29:56.565245 580663 main.go:141] libmachine: (no-preload-677886) Calling .Close
I0120 12:29:56.565606 580663 main.go:141] libmachine: Successfully made call to close driver server
I0120 12:29:56.565629 580663 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 12:29:56.619007 580663 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0120 12:29:56.619039 580663 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0120 12:29:56.732894 580663 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0120 12:29:56.732942 580663 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0120 12:29:56.864261 580663 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0120 12:29:56.864282 580663 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0120 12:29:56.893833 580663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0120 12:29:57.402805 580663 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.142824049s)
I0120 12:29:57.402860 580663 main.go:141] libmachine: Making call to close driver server
I0120 12:29:57.402872 580663 main.go:141] libmachine: (no-preload-677886) Calling .Close
I0120 12:29:57.403187 580663 main.go:141] libmachine: Successfully made call to close driver server
I0120 12:29:57.403224 580663 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 12:29:57.403228 580663 main.go:141] libmachine: (no-preload-677886) DBG | Closing plugin on server side
I0120 12:29:57.403240 580663 main.go:141] libmachine: Making call to close driver server
I0120 12:29:57.403251 580663 main.go:141] libmachine: (no-preload-677886) Calling .Close
I0120 12:29:57.403645 580663 main.go:141] libmachine: Successfully made call to close driver server
I0120 12:29:57.403661 580663 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 12:29:58.156343 580663 pod_ready.go:103] pod "etcd-no-preload-677886" in "kube-system" namespace has status "Ready":"False"
I0120 12:29:58.212022 580663 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.805656545s)
I0120 12:29:58.212073 580663 main.go:141] libmachine: Making call to close driver server
I0120 12:29:58.212089 580663 main.go:141] libmachine: (no-preload-677886) Calling .Close
I0120 12:29:58.212421 580663 main.go:141] libmachine: (no-preload-677886) DBG | Closing plugin on server side
I0120 12:29:58.212472 580663 main.go:141] libmachine: Successfully made call to close driver server
I0120 12:29:58.212484 580663 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 12:29:58.212492 580663 main.go:141] libmachine: Making call to close driver server
I0120 12:29:58.212502 580663 main.go:141] libmachine: (no-preload-677886) Calling .Close
I0120 12:29:58.212754 580663 main.go:141] libmachine: Successfully made call to close driver server
I0120 12:29:58.212776 580663 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 12:29:58.212787 580663 addons.go:479] Verifying addon metrics-server=true in "no-preload-677886"
I0120 12:29:59.132234 580663 pod_ready.go:93] pod "etcd-no-preload-677886" in "kube-system" namespace has status "Ready":"True"
I0120 12:29:59.132257 580663 pod_ready.go:82] duration metric: took 3.009475203s for pod "etcd-no-preload-677886" in "kube-system" namespace to be "Ready" ...
I0120 12:29:59.132266 580663 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-677886" in "kube-system" namespace to be "Ready" ...
I0120 12:29:59.535990 580663 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.642103584s)
I0120 12:29:59.536050 580663 main.go:141] libmachine: Making call to close driver server
I0120 12:29:59.536065 580663 main.go:141] libmachine: (no-preload-677886) Calling .Close
I0120 12:29:59.537910 580663 main.go:141] libmachine: (no-preload-677886) DBG | Closing plugin on server side
I0120 12:29:59.537945 580663 main.go:141] libmachine: Successfully made call to close driver server
I0120 12:29:59.537960 580663 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 12:29:59.537969 580663 main.go:141] libmachine: Making call to close driver server
I0120 12:29:59.537974 580663 main.go:141] libmachine: (no-preload-677886) Calling .Close
I0120 12:29:59.540301 580663 main.go:141] libmachine: Successfully made call to close driver server
I0120 12:29:59.540320 580663 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 12:29:59.542169 580663 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p no-preload-677886 addons enable metrics-server
I0120 12:29:59.543685 580663 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
I0120 12:29:59.544960 580663 addons.go:514] duration metric: took 3.716966822s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
I0120 12:30:01.140019 580663 pod_ready.go:103] pod "kube-apiserver-no-preload-677886" in "kube-system" namespace has status "Ready":"False"
I0120 12:30:01.640096 580663 pod_ready.go:93] pod "kube-apiserver-no-preload-677886" in "kube-system" namespace has status "Ready":"True"
I0120 12:30:01.640124 580663 pod_ready.go:82] duration metric: took 2.507849401s for pod "kube-apiserver-no-preload-677886" in "kube-system" namespace to be "Ready" ...
I0120 12:30:01.640139 580663 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-677886" in "kube-system" namespace to be "Ready" ...
I0120 12:30:02.647785 580663 pod_ready.go:93] pod "kube-controller-manager-no-preload-677886" in "kube-system" namespace has status "Ready":"True"
I0120 12:30:02.647813 580663 pod_ready.go:82] duration metric: took 1.007665809s for pod "kube-controller-manager-no-preload-677886" in "kube-system" namespace to be "Ready" ...
I0120 12:30:02.647829 580663 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-677886" in "kube-system" namespace to be "Ready" ...
I0120 12:30:02.652782 580663 pod_ready.go:93] pod "kube-scheduler-no-preload-677886" in "kube-system" namespace has status "Ready":"True"
I0120 12:30:02.652809 580663 pod_ready.go:82] duration metric: took 4.97098ms for pod "kube-scheduler-no-preload-677886" in "kube-system" namespace to be "Ready" ...
I0120 12:30:02.652821 580663 pod_ready.go:39] duration metric: took 6.536455725s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0120 12:30:02.652839 580663 api_server.go:52] waiting for apiserver process to appear ...
I0120 12:30:02.652893 580663 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0120 12:30:02.669503 580663 api_server.go:72] duration metric: took 6.84155672s to wait for apiserver process to appear ...
I0120 12:30:02.669532 580663 api_server.go:88] waiting for apiserver healthz status ...
I0120 12:30:02.669555 580663 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
I0120 12:30:02.674523 580663 api_server.go:279] https://192.168.72.157:8443/healthz returned 200:
ok
I0120 12:30:02.675672 580663 api_server.go:141] control plane version: v1.32.0
I0120 12:30:02.675695 580663 api_server.go:131] duration metric: took 6.15459ms to wait for apiserver health ...
I0120 12:30:02.675705 580663 system_pods.go:43] waiting for kube-system pods to appear ...
I0120 12:30:02.680997 580663 system_pods.go:59] 9 kube-system pods found
I0120 12:30:02.681020 580663 system_pods.go:61] "coredns-668d6bf9bc-9xmv8" [341d3c31-11b2-4764-98bf-e97ec1a50fd2] Running
I0120 12:30:02.681025 580663 system_pods.go:61] "coredns-668d6bf9bc-wsnqr" [be77eebd-ba8c-42a5-acf0-dbe37c295e78] Running
I0120 12:30:02.681028 580663 system_pods.go:61] "etcd-no-preload-677886" [6df18fe2-2b6d-4ffb-8f91-ce21e0adc82c] Running
I0120 12:30:02.681032 580663 system_pods.go:61] "kube-apiserver-no-preload-677886" [db6208f0-66c4-46d0-9ee8-5dfe2a6ba67e] Running
I0120 12:30:02.681036 580663 system_pods.go:61] "kube-controller-manager-no-preload-677886" [bc9fd099-51fd-4d05-b8b2-496516d0afdd] Running
I0120 12:30:02.681039 580663 system_pods.go:61] "kube-proxy-7mw9s" [c53d64fd-036a-45a3-bef6-852216c16650] Running
I0120 12:30:02.681042 580663 system_pods.go:61] "kube-scheduler-no-preload-677886" [9ff2c632-77fa-4591-9d06-597df8321a9b] Running
I0120 12:30:02.681047 580663 system_pods.go:61] "metrics-server-f79f97bbb-4c528" [c970f3ba-5f5c-4cc5-8a4e-99fb56ba8778] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0120 12:30:02.681051 580663 system_pods.go:61] "storage-provisioner" [0df3fd9b-b206-4ecd-86cb-60d39e1bf6c1] Running
I0120 12:30:02.681057 580663 system_pods.go:74] duration metric: took 5.346355ms to wait for pod list to return data ...
I0120 12:30:02.681065 580663 default_sa.go:34] waiting for default service account to be created ...
I0120 12:30:02.683574 580663 default_sa.go:45] found service account: "default"
I0120 12:30:02.683592 580663 default_sa.go:55] duration metric: took 2.522551ms for default service account to be created ...
I0120 12:30:02.683599 580663 system_pods.go:137] waiting for k8s-apps to be running ...
I0120 12:30:02.689661 580663 system_pods.go:87] 9 kube-system pods found
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p no-preload-677886 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.32.0": signal: killed
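The start was killed by the test harness timeout (signal: killed after 25m38s) rather than exiting with its own error; the last stderr lines captured before the kill show the wait loop still verifying kube-system pods, with metrics-server-f79f97bbb-4c528 not yet Ready. A minimal manual reproduction sketch, reusing only values recorded in this log (profile, driver, runtime, Kubernetes version, apiserver IP); the kubectl context name matching the profile is an assumption:

  out/minikube-linux-amd64 start -p no-preload-677886 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.32.0
  # probe the same healthz endpoint the test polls (IP taken from the api_server.go lines above)
  curl -k https://192.168.72.157:8443/healthz
  # inspect kube-system pod readiness (context name assumed to match the profile)
  kubectl --context no-preload-677886 get pods -n kube-system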
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-677886 -n no-preload-677886
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p no-preload-677886 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-677886 logs -n 25: (1.347010533s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/SecondStart logs:
-- stdout --
==> Audit <==
|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
| ssh | -p custom-flannel-912009 | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | |
| | sudo cat | | | | | |
| | /etc/kube-flannel/cni-conf.json | | | | | |
| ssh | -p custom-flannel-912009 sudo | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
| | systemctl status kubelet --all | | | | | |
| | --full --no-pager | | | | | |
| ssh | -p custom-flannel-912009 | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
| | sudo systemctl cat kubelet | | | | | |
| | --no-pager | | | | | |
| ssh | -p custom-flannel-912009 sudo | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
| | journalctl -xeu kubelet --all | | | | | |
| | --full --no-pager | | | | | |
| ssh | -p custom-flannel-912009 | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
| | sudo cat | | | | | |
| | /etc/kubernetes/kubelet.conf | | | | | |
| ssh | -p custom-flannel-912009 | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
| | sudo cat | | | | | |
| | /var/lib/kubelet/config.yaml | | | | | |
| ssh | -p custom-flannel-912009 sudo | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | |
| | systemctl status docker --all | | | | | |
| | --full --no-pager | | | | | |
| ssh | -p custom-flannel-912009 | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
| | sudo systemctl cat docker | | | | | |
| | --no-pager | | | | | |
| ssh | -p custom-flannel-912009 sudo | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
| | cat /etc/docker/daemon.json | | | | | |
| ssh | -p custom-flannel-912009 sudo | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | |
| | docker system info | | | | | |
| ssh | -p custom-flannel-912009 sudo | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | |
| | systemctl status cri-docker | | | | | |
| | --all --full --no-pager | | | | | |
| ssh | -p custom-flannel-912009 | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
| | sudo systemctl cat cri-docker | | | | | |
| | --no-pager | | | | | |
| ssh | -p custom-flannel-912009 sudo cat | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | |
| | /etc/systemd/system/cri-docker.service.d/10-cni.conf | | | | | |
| ssh | -p custom-flannel-912009 sudo cat | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
| | /usr/lib/systemd/system/cri-docker.service | | | | | |
| ssh | -p custom-flannel-912009 sudo | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
| | cri-dockerd --version | | | | | |
| ssh | -p custom-flannel-912009 sudo | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
| | systemctl status containerd | | | | | |
| | --all --full --no-pager | | | | | |
| ssh | -p custom-flannel-912009 | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
| | sudo systemctl cat containerd | | | | | |
| | --no-pager | | | | | |
| ssh | -p custom-flannel-912009 sudo cat | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
| | /lib/systemd/system/containerd.service | | | | | |
| ssh | -p custom-flannel-912009 | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
| | sudo cat | | | | | |
| | /etc/containerd/config.toml | | | | | |
| ssh | -p custom-flannel-912009 sudo | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
| | containerd config dump | | | | | |
| ssh | -p custom-flannel-912009 sudo | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | |
| | systemctl status crio --all | | | | | |
| | --full --no-pager | | | | | |
| ssh | -p custom-flannel-912009 sudo | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
| | systemctl cat crio --no-pager | | | | | |
| ssh | -p custom-flannel-912009 sudo | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
| | find /etc/crio -type f -exec | | | | | |
| | sh -c 'echo {}; cat {}' \; | | | | | |
| ssh | -p custom-flannel-912009 sudo | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
| | crio config | | | | | |
| delete | -p custom-flannel-912009 | custom-flannel-912009 | jenkins | v1.35.0 | 20 Jan 25 12:36 UTC | 20 Jan 25 12:36 UTC |
|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
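The audit table above is part of the minikube logs output collected during the post-mortem; the same listing can be regenerated with the invocation already recorded at helpers_test.go:247:

  out/minikube-linux-amd64 -p no-preload-677886 logs -n 25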
==> Last Start <==
Log file created at: 2025/01/20 12:34:55
Running on machine: ubuntu-20-agent-11
Binary: Built with gc go1.23.4 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0120 12:34:55.317626 593695 out.go:345] Setting OutFile to fd 1 ...
I0120 12:34:55.318098 593695 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 12:34:55.318140 593695 out.go:358] Setting ErrFile to fd 2...
I0120 12:34:55.318166 593695 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 12:34:55.318820 593695 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-530330/.minikube/bin
I0120 12:34:55.319727 593695 out.go:352] Setting JSON to false
I0120 12:34:55.321284 593695 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8237,"bootTime":1737368258,"procs":301,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0120 12:34:55.321400 593695 start.go:139] virtualization: kvm guest
I0120 12:34:55.323443 593695 out.go:177] * [custom-flannel-912009] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
I0120 12:34:55.325326 593695 out.go:177] - MINIKUBE_LOCATION=20151
I0120 12:34:55.325338 593695 notify.go:220] Checking for updates...
I0120 12:34:55.328258 593695 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0120 12:34:55.329657 593695 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20151-530330/kubeconfig
I0120 12:34:55.331093 593695 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-530330/.minikube
I0120 12:34:55.332440 593695 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0120 12:34:55.333657 593695 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0120 12:34:55.335502 593695 config.go:182] Loaded profile config "calico-912009": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 12:34:55.335654 593695 config.go:182] Loaded profile config "embed-certs-565837": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 12:34:55.335772 593695 config.go:182] Loaded profile config "no-preload-677886": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 12:34:55.335906 593695 driver.go:394] Setting default libvirt URI to qemu:///system
I0120 12:34:55.378824 593695 out.go:177] * Using the kvm2 driver based on user configuration
I0120 12:34:55.380206 593695 start.go:297] selected driver: kvm2
I0120 12:34:55.380226 593695 start.go:901] validating driver "kvm2" against <nil>
I0120 12:34:55.380239 593695 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0120 12:34:55.380924 593695 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 12:34:55.380997 593695 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20151-530330/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0120 12:34:55.398891 593695 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
I0120 12:34:55.398946 593695 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0120 12:34:55.399228 593695 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0120 12:34:55.399267 593695 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
I0120 12:34:55.399286 593695 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
I0120 12:34:55.399352 593695 start.go:340] cluster config:
{Name:custom-flannel-912009 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:custom-flannel-912009 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0120 12:34:55.399486 593695 iso.go:125] acquiring lock: {Name:mk734d848ce0e9a68d8d00ecbd0f5085f599b42f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 12:34:55.402211 593695 out.go:177] * Starting "custom-flannel-912009" primary control-plane node in "custom-flannel-912009" cluster
I0120 12:34:55.403487 593695 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
I0120 12:34:55.403526 593695 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20151-530330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-amd64.tar.lz4
I0120 12:34:55.403534 593695 cache.go:56] Caching tarball of preloaded images
I0120 12:34:55.403644 593695 preload.go:172] Found /home/jenkins/minikube-integration/20151-530330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0120 12:34:55.403657 593695 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on containerd
I0120 12:34:55.403760 593695 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/config.json ...
I0120 12:34:55.403781 593695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/config.json: {Name:mk1f5bd3895f8f37884cdb08f1e892c201dc31bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 12:34:55.403947 593695 start.go:360] acquireMachinesLock for custom-flannel-912009: {Name:mkcd5f2d114897136dd2343f9fcf468e718657e6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0120 12:34:55.403984 593695 start.go:364] duration metric: took 19.852µs to acquireMachinesLock for "custom-flannel-912009"
I0120 12:34:55.404004 593695 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-912009 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:custom-flannel-912009 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0120 12:34:55.404078 593695 start.go:125] createHost starting for "" (driver="kvm2")
I0120 12:34:54.418015 591909 node_ready.go:53] node "calico-912009" has status "Ready":"False"
I0120 12:34:56.418900 591909 node_ready.go:53] node "calico-912009" has status "Ready":"False"
I0120 12:34:58.918122 591909 node_ready.go:53] node "calico-912009" has status "Ready":"False"
I0120 12:34:55.405689 593695 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
I0120 12:34:55.405857 593695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 12:34:55.405898 593695 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 12:34:55.421394 593695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46729
I0120 12:34:55.421940 593695 main.go:141] libmachine: () Calling .GetVersion
I0120 12:34:55.422589 593695 main.go:141] libmachine: Using API Version 1
I0120 12:34:55.422629 593695 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 12:34:55.423222 593695 main.go:141] libmachine: () Calling .GetMachineName
I0120 12:34:55.423525 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetMachineName
I0120 12:34:55.423711 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .DriverName
I0120 12:34:55.423949 593695 start.go:159] libmachine.API.Create for "custom-flannel-912009" (driver="kvm2")
I0120 12:34:55.424001 593695 client.go:168] LocalClient.Create starting
I0120 12:34:55.424053 593695 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca.pem
I0120 12:34:55.424104 593695 main.go:141] libmachine: Decoding PEM data...
I0120 12:34:55.424127 593695 main.go:141] libmachine: Parsing certificate...
I0120 12:34:55.424219 593695 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20151-530330/.minikube/certs/cert.pem
I0120 12:34:55.424244 593695 main.go:141] libmachine: Decoding PEM data...
I0120 12:34:55.424262 593695 main.go:141] libmachine: Parsing certificate...
I0120 12:34:55.424287 593695 main.go:141] libmachine: Running pre-create checks...
I0120 12:34:55.424305 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .PreCreateCheck
I0120 12:34:55.424734 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetConfigRaw
I0120 12:34:55.425305 593695 main.go:141] libmachine: Creating machine...
I0120 12:34:55.425318 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .Create
I0120 12:34:55.425495 593695 main.go:141] libmachine: (custom-flannel-912009) creating KVM machine...
I0120 12:34:55.425519 593695 main.go:141] libmachine: (custom-flannel-912009) creating network...
I0120 12:34:55.426842 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found existing default KVM network
I0120 12:34:55.428088 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:34:55.427921 593717 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:32:62:a8} reservation:<nil>}
I0120 12:34:55.429366 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:34:55.429267 593717 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001194e0}
I0120 12:34:55.429388 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | created network xml:
I0120 12:34:55.429399 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | <network>
I0120 12:34:55.429409 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | <name>mk-custom-flannel-912009</name>
I0120 12:34:55.429417 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | <dns enable='no'/>
I0120 12:34:55.429422 593695 main.go:141] libmachine: (custom-flannel-912009) DBG |
I0120 12:34:55.429440 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | <ip address='192.168.50.1' netmask='255.255.255.0'>
I0120 12:34:55.429448 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | <dhcp>
I0120 12:34:55.429459 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | <range start='192.168.50.2' end='192.168.50.253'/>
I0120 12:34:55.429475 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | </dhcp>
I0120 12:34:55.429487 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | </ip>
I0120 12:34:55.429497 593695 main.go:141] libmachine: (custom-flannel-912009) DBG |
I0120 12:34:55.429513 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | </network>
I0120 12:34:55.429524 593695 main.go:141] libmachine: (custom-flannel-912009) DBG |
I0120 12:34:55.434573 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | trying to create private KVM network mk-custom-flannel-912009 192.168.50.0/24...
I0120 12:34:55.523742 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | private KVM network mk-custom-flannel-912009 192.168.50.0/24 created
I0120 12:34:55.523770 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:34:55.523396 593717 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20151-530330/.minikube
I0120 12:34:55.523822 593695 main.go:141] libmachine: (custom-flannel-912009) setting up store path in /home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009 ...
I0120 12:34:55.523855 593695 main.go:141] libmachine: (custom-flannel-912009) building disk image from file:///home/jenkins/minikube-integration/20151-530330/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
I0120 12:34:55.523992 593695 main.go:141] libmachine: (custom-flannel-912009) Downloading /home/jenkins/minikube-integration/20151-530330/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20151-530330/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
I0120 12:34:55.815001 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:34:55.814810 593717 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009/id_rsa...
I0120 12:34:56.245898 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:34:56.245727 593717 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009/custom-flannel-912009.rawdisk...
I0120 12:34:56.245930 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | Writing magic tar header
I0120 12:34:56.245949 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | Writing SSH key tar header
I0120 12:34:56.245964 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:34:56.245896 593717 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009 ...
I0120 12:34:56.245994 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009
I0120 12:34:56.246097 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20151-530330/.minikube/machines
I0120 12:34:56.246128 593695 main.go:141] libmachine: (custom-flannel-912009) setting executable bit set on /home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009 (perms=drwx------)
I0120 12:34:56.246141 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20151-530330/.minikube
I0120 12:34:56.246172 593695 main.go:141] libmachine: (custom-flannel-912009) setting executable bit set on /home/jenkins/minikube-integration/20151-530330/.minikube/machines (perms=drwxr-xr-x)
I0120 12:34:56.246200 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20151-530330
I0120 12:34:56.246212 593695 main.go:141] libmachine: (custom-flannel-912009) setting executable bit set on /home/jenkins/minikube-integration/20151-530330/.minikube (perms=drwxr-xr-x)
I0120 12:34:56.246229 593695 main.go:141] libmachine: (custom-flannel-912009) setting executable bit set on /home/jenkins/minikube-integration/20151-530330 (perms=drwxrwxr-x)
I0120 12:34:56.246238 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | checking permissions on dir: /home/jenkins/minikube-integration
I0120 12:34:56.246247 593695 main.go:141] libmachine: (custom-flannel-912009) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I0120 12:34:56.246258 593695 main.go:141] libmachine: (custom-flannel-912009) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I0120 12:34:56.246265 593695 main.go:141] libmachine: (custom-flannel-912009) creating domain...
I0120 12:34:56.246277 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | checking permissions on dir: /home/jenkins
I0120 12:34:56.246285 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | checking permissions on dir: /home
I0120 12:34:56.246295 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | skipping /home - not owner
I0120 12:34:56.247428 593695 main.go:141] libmachine: (custom-flannel-912009) define libvirt domain using xml:
I0120 12:34:56.247449 593695 main.go:141] libmachine: (custom-flannel-912009) <domain type='kvm'>
I0120 12:34:56.247459 593695 main.go:141] libmachine: (custom-flannel-912009) <name>custom-flannel-912009</name>
I0120 12:34:56.247467 593695 main.go:141] libmachine: (custom-flannel-912009) <memory unit='MiB'>3072</memory>
I0120 12:34:56.247482 593695 main.go:141] libmachine: (custom-flannel-912009) <vcpu>2</vcpu>
I0120 12:34:56.247493 593695 main.go:141] libmachine: (custom-flannel-912009) <features>
I0120 12:34:56.247502 593695 main.go:141] libmachine: (custom-flannel-912009) <acpi/>
I0120 12:34:56.247525 593695 main.go:141] libmachine: (custom-flannel-912009) <apic/>
I0120 12:34:56.247552 593695 main.go:141] libmachine: (custom-flannel-912009) <pae/>
I0120 12:34:56.247575 593695 main.go:141] libmachine: (custom-flannel-912009)
I0120 12:34:56.247586 593695 main.go:141] libmachine: (custom-flannel-912009) </features>
I0120 12:34:56.247595 593695 main.go:141] libmachine: (custom-flannel-912009) <cpu mode='host-passthrough'>
I0120 12:34:56.247606 593695 main.go:141] libmachine: (custom-flannel-912009)
I0120 12:34:56.247615 593695 main.go:141] libmachine: (custom-flannel-912009) </cpu>
I0120 12:34:56.247625 593695 main.go:141] libmachine: (custom-flannel-912009) <os>
I0120 12:34:56.247635 593695 main.go:141] libmachine: (custom-flannel-912009) <type>hvm</type>
I0120 12:34:56.247644 593695 main.go:141] libmachine: (custom-flannel-912009) <boot dev='cdrom'/>
I0120 12:34:56.247658 593695 main.go:141] libmachine: (custom-flannel-912009) <boot dev='hd'/>
I0120 12:34:56.247670 593695 main.go:141] libmachine: (custom-flannel-912009) <bootmenu enable='no'/>
I0120 12:34:56.247682 593695 main.go:141] libmachine: (custom-flannel-912009) </os>
I0120 12:34:56.247690 593695 main.go:141] libmachine: (custom-flannel-912009) <devices>
I0120 12:34:56.247701 593695 main.go:141] libmachine: (custom-flannel-912009) <disk type='file' device='cdrom'>
I0120 12:34:56.247717 593695 main.go:141] libmachine: (custom-flannel-912009) <source file='/home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009/boot2docker.iso'/>
I0120 12:34:56.247732 593695 main.go:141] libmachine: (custom-flannel-912009) <target dev='hdc' bus='scsi'/>
I0120 12:34:56.247741 593695 main.go:141] libmachine: (custom-flannel-912009) <readonly/>
I0120 12:34:56.247748 593695 main.go:141] libmachine: (custom-flannel-912009) </disk>
I0120 12:34:56.247776 593695 main.go:141] libmachine: (custom-flannel-912009) <disk type='file' device='disk'>
I0120 12:34:56.247790 593695 main.go:141] libmachine: (custom-flannel-912009) <driver name='qemu' type='raw' cache='default' io='threads' />
I0120 12:34:56.247828 593695 main.go:141] libmachine: (custom-flannel-912009) <source file='/home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009/custom-flannel-912009.rawdisk'/>
I0120 12:34:56.247852 593695 main.go:141] libmachine: (custom-flannel-912009) <target dev='hda' bus='virtio'/>
I0120 12:34:56.247876 593695 main.go:141] libmachine: (custom-flannel-912009) </disk>
I0120 12:34:56.247896 593695 main.go:141] libmachine: (custom-flannel-912009) <interface type='network'>
I0120 12:34:56.247910 593695 main.go:141] libmachine: (custom-flannel-912009) <source network='mk-custom-flannel-912009'/>
I0120 12:34:56.247921 593695 main.go:141] libmachine: (custom-flannel-912009) <model type='virtio'/>
I0120 12:34:56.247932 593695 main.go:141] libmachine: (custom-flannel-912009) </interface>
I0120 12:34:56.247939 593695 main.go:141] libmachine: (custom-flannel-912009) <interface type='network'>
I0120 12:34:56.247951 593695 main.go:141] libmachine: (custom-flannel-912009) <source network='default'/>
I0120 12:34:56.247968 593695 main.go:141] libmachine: (custom-flannel-912009) <model type='virtio'/>
I0120 12:34:56.247979 593695 main.go:141] libmachine: (custom-flannel-912009) </interface>
I0120 12:34:56.247989 593695 main.go:141] libmachine: (custom-flannel-912009) <serial type='pty'>
I0120 12:34:56.247999 593695 main.go:141] libmachine: (custom-flannel-912009) <target port='0'/>
I0120 12:34:56.248009 593695 main.go:141] libmachine: (custom-flannel-912009) </serial>
I0120 12:34:56.248018 593695 main.go:141] libmachine: (custom-flannel-912009) <console type='pty'>
I0120 12:34:56.248033 593695 main.go:141] libmachine: (custom-flannel-912009) <target type='serial' port='0'/>
I0120 12:34:56.248044 593695 main.go:141] libmachine: (custom-flannel-912009) </console>
I0120 12:34:56.248063 593695 main.go:141] libmachine: (custom-flannel-912009) <rng model='virtio'>
I0120 12:34:56.248077 593695 main.go:141] libmachine: (custom-flannel-912009) <backend model='random'>/dev/random</backend>
I0120 12:34:56.248087 593695 main.go:141] libmachine: (custom-flannel-912009) </rng>
I0120 12:34:56.248098 593695 main.go:141] libmachine: (custom-flannel-912009)
I0120 12:34:56.248108 593695 main.go:141] libmachine: (custom-flannel-912009)
I0120 12:34:56.248126 593695 main.go:141] libmachine: (custom-flannel-912009) </devices>
I0120 12:34:56.248143 593695 main.go:141] libmachine: (custom-flannel-912009) </domain>
I0120 12:34:56.248157 593695 main.go:141] libmachine: (custom-flannel-912009)
I0120 12:34:56.251886 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:5c:75:87 in network default
I0120 12:34:56.252644 593695 main.go:141] libmachine: (custom-flannel-912009) starting domain...
I0120 12:34:56.252667 593695 main.go:141] libmachine: (custom-flannel-912009) ensuring networks are active...
I0120 12:34:56.252679 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:34:56.253478 593695 main.go:141] libmachine: (custom-flannel-912009) Ensuring network default is active
I0120 12:34:56.253856 593695 main.go:141] libmachine: (custom-flannel-912009) Ensuring network mk-custom-flannel-912009 is active
I0120 12:34:56.254478 593695 main.go:141] libmachine: (custom-flannel-912009) getting domain XML...
I0120 12:34:56.255132 593695 main.go:141] libmachine: (custom-flannel-912009) creating domain...
I0120 12:34:57.617443 593695 main.go:141] libmachine: (custom-flannel-912009) waiting for IP...
I0120 12:34:57.618468 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:34:57.618975 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find current IP address of domain custom-flannel-912009 in network mk-custom-flannel-912009
I0120 12:34:57.619079 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:34:57.618982 593717 retry.go:31] will retry after 310.833975ms: waiting for domain to come up
I0120 12:34:57.931884 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:34:57.932609 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find current IP address of domain custom-flannel-912009 in network mk-custom-flannel-912009
I0120 12:34:57.932671 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:34:57.932587 593717 retry.go:31] will retry after 389.24926ms: waiting for domain to come up
I0120 12:34:58.323123 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:34:58.323741 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find current IP address of domain custom-flannel-912009 in network mk-custom-flannel-912009
I0120 12:34:58.323766 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:34:58.323662 593717 retry.go:31] will retry after 328.51544ms: waiting for domain to come up
I0120 12:34:58.654475 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:34:58.654999 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find current IP address of domain custom-flannel-912009 in network mk-custom-flannel-912009
I0120 12:34:58.655031 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:34:58.654972 593717 retry.go:31] will retry after 459.188002ms: waiting for domain to come up
I0120 12:34:59.115485 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:34:59.116075 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find current IP address of domain custom-flannel-912009 in network mk-custom-flannel-912009
I0120 12:34:59.116099 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:34:59.116039 593717 retry.go:31] will retry after 671.328829ms: waiting for domain to come up
I0120 12:34:59.788826 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:34:59.789486 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find current IP address of domain custom-flannel-912009 in network mk-custom-flannel-912009
I0120 12:34:59.789535 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:34:59.789441 593717 retry.go:31] will retry after 722.417342ms: waiting for domain to come up
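The retries above are libmachine polling libvirt for the new domain's DHCP lease until an IP appears. A quick manual check, assuming virsh is available on the CI host and using the domain and network names from this log:

  virsh net-dhcp-leases mk-custom-flannel-912009
  virsh domifaddr custom-flannel-912009 --source lease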
I0120 12:35:00.417246 591909 node_ready.go:49] node "calico-912009" has status "Ready":"True"
I0120 12:35:00.417269 591909 node_ready.go:38] duration metric: took 8.003348027s for node "calico-912009" to be "Ready" ...
I0120 12:35:00.417280 591909 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0120 12:35:00.427079 591909 pod_ready.go:79] waiting up to 15m0s for pod "calico-kube-controllers-5745477d4d-mz446" in "kube-system" namespace to be "Ready" ...
I0120 12:35:02.434616 591909 pod_ready.go:103] pod "calico-kube-controllers-5745477d4d-mz446" in "kube-system" namespace has status "Ready":"False"
I0120 12:35:00.513299 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:35:00.513926 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find current IP address of domain custom-flannel-912009 in network mk-custom-flannel-912009
I0120 12:35:00.513953 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:35:00.513882 593717 retry.go:31] will retry after 1.004102642s: waiting for domain to come up
I0120 12:35:01.520257 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:35:01.520856 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find current IP address of domain custom-flannel-912009 in network mk-custom-flannel-912009
I0120 12:35:01.520887 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:35:01.520792 593717 retry.go:31] will retry after 1.187548146s: waiting for domain to come up
I0120 12:35:02.710370 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:35:02.710926 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find current IP address of domain custom-flannel-912009 in network mk-custom-flannel-912009
I0120 12:35:02.710960 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:35:02.710891 593717 retry.go:31] will retry after 1.130666152s: waiting for domain to come up
I0120 12:35:03.843031 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:35:03.843591 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find current IP address of domain custom-flannel-912009 in network mk-custom-flannel-912009
I0120 12:35:03.843657 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:35:03.843573 593717 retry.go:31] will retry after 2.084857552s: waiting for domain to come up
I0120 12:35:04.932987 591909 pod_ready.go:103] pod "calico-kube-controllers-5745477d4d-mz446" in "kube-system" namespace has status "Ready":"False"
I0120 12:35:06.934911 591909 pod_ready.go:103] pod "calico-kube-controllers-5745477d4d-mz446" in "kube-system" namespace has status "Ready":"False"
I0120 12:35:05.930313 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:35:05.930995 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find current IP address of domain custom-flannel-912009 in network mk-custom-flannel-912009
I0120 12:35:05.931129 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:35:05.931024 593717 retry.go:31] will retry after 2.721943033s: waiting for domain to come up
I0120 12:35:08.655556 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:35:08.656095 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find current IP address of domain custom-flannel-912009 in network mk-custom-flannel-912009
I0120 12:35:08.656125 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:35:08.656041 593717 retry.go:31] will retry after 3.50397462s: waiting for domain to come up
I0120 12:35:09.434933 591909 pod_ready.go:103] pod "calico-kube-controllers-5745477d4d-mz446" in "kube-system" namespace has status "Ready":"False"
I0120 12:35:11.938250 591909 pod_ready.go:103] pod "calico-kube-controllers-5745477d4d-mz446" in "kube-system" namespace has status "Ready":"False"
I0120 12:35:12.161925 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:35:12.162527 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find current IP address of domain custom-flannel-912009 in network mk-custom-flannel-912009
I0120 12:35:12.162555 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:35:12.162507 593717 retry.go:31] will retry after 4.028021149s: waiting for domain to come up
I0120 12:35:14.433852 591909 pod_ready.go:103] pod "calico-kube-controllers-5745477d4d-mz446" in "kube-system" namespace has status "Ready":"False"
I0120 12:35:16.936370 591909 pod_ready.go:93] pod "calico-kube-controllers-5745477d4d-mz446" in "kube-system" namespace has status "Ready":"True"
I0120 12:35:16.936407 591909 pod_ready.go:82] duration metric: took 16.509299944s for pod "calico-kube-controllers-5745477d4d-mz446" in "kube-system" namespace to be "Ready" ...
I0120 12:35:16.936423 591909 pod_ready.go:79] waiting up to 15m0s for pod "calico-node-58f5q" in "kube-system" namespace to be "Ready" ...
I0120 12:35:18.944599 591909 pod_ready.go:103] pod "calico-node-58f5q" in "kube-system" namespace has status "Ready":"False"
I0120 12:35:16.192015 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:35:16.192673 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find current IP address of domain custom-flannel-912009 in network mk-custom-flannel-912009
I0120 12:35:16.192705 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | I0120 12:35:16.192623 593717 retry.go:31] will retry after 4.250339401s: waiting for domain to come up
I0120 12:35:21.444844 591909 pod_ready.go:103] pod "calico-node-58f5q" in "kube-system" namespace has status "Ready":"False"
I0120 12:35:23.961659 591909 pod_ready.go:93] pod "calico-node-58f5q" in "kube-system" namespace has status "Ready":"True"
I0120 12:35:23.961686 591909 pod_ready.go:82] duration metric: took 7.025255499s for pod "calico-node-58f5q" in "kube-system" namespace to be "Ready" ...
I0120 12:35:23.961697 591909 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-qtrbt" in "kube-system" namespace to be "Ready" ...
I0120 12:35:23.986722 591909 pod_ready.go:93] pod "coredns-668d6bf9bc-qtrbt" in "kube-system" namespace has status "Ready":"True"
I0120 12:35:23.986746 591909 pod_ready.go:82] duration metric: took 25.042668ms for pod "coredns-668d6bf9bc-qtrbt" in "kube-system" namespace to be "Ready" ...
I0120 12:35:23.986757 591909 pod_ready.go:79] waiting up to 15m0s for pod "etcd-calico-912009" in "kube-system" namespace to be "Ready" ...
I0120 12:35:23.996405 591909 pod_ready.go:93] pod "etcd-calico-912009" in "kube-system" namespace has status "Ready":"True"
I0120 12:35:23.996431 591909 pod_ready.go:82] duration metric: took 9.66769ms for pod "etcd-calico-912009" in "kube-system" namespace to be "Ready" ...
I0120 12:35:23.996443 591909 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-calico-912009" in "kube-system" namespace to be "Ready" ...
I0120 12:35:24.005532 591909 pod_ready.go:93] pod "kube-apiserver-calico-912009" in "kube-system" namespace has status "Ready":"True"
I0120 12:35:24.005568 591909 pod_ready.go:82] duration metric: took 9.117419ms for pod "kube-apiserver-calico-912009" in "kube-system" namespace to be "Ready" ...
I0120 12:35:24.005586 591909 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-calico-912009" in "kube-system" namespace to be "Ready" ...
I0120 12:35:24.014286 591909 pod_ready.go:93] pod "kube-controller-manager-calico-912009" in "kube-system" namespace has status "Ready":"True"
I0120 12:35:24.014320 591909 pod_ready.go:82] duration metric: took 8.724239ms for pod "kube-controller-manager-calico-912009" in "kube-system" namespace to be "Ready" ...
I0120 12:35:24.014336 591909 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-d42xv" in "kube-system" namespace to be "Ready" ...
I0120 12:35:20.444937 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:35:20.445623 593695 main.go:141] libmachine: (custom-flannel-912009) found domain IP: 192.168.50.190
I0120 12:35:20.445652 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has current primary IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:35:20.445660 593695 main.go:141] libmachine: (custom-flannel-912009) reserving static IP address...
I0120 12:35:20.446017 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find host DHCP lease matching {name: "custom-flannel-912009", mac: "52:54:00:d9:0c:b1", ip: "192.168.50.190"} in network mk-custom-flannel-912009
I0120 12:35:20.527289 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | Getting to WaitForSSH function...
I0120 12:35:20.527318 593695 main.go:141] libmachine: (custom-flannel-912009) reserved static IP address 192.168.50.190 for domain custom-flannel-912009
I0120 12:35:20.527331 593695 main.go:141] libmachine: (custom-flannel-912009) waiting for SSH...
I0120 12:35:20.530131 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:35:20.530494 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009
I0120 12:35:20.530526 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | unable to find defined IP address of network mk-custom-flannel-912009 interface with MAC address 52:54:00:d9:0c:b1
I0120 12:35:20.530642 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | Using SSH client type: external
I0120 12:35:20.530670 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | Using SSH private key: /home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009/id_rsa (-rw-------)
I0120 12:35:20.530724 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009/id_rsa -p 22] /usr/bin/ssh <nil>}
I0120 12:35:20.530748 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | About to run SSH command:
I0120 12:35:20.530761 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | exit 0
I0120 12:35:20.534553 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | SSH cmd err, output: exit status 255:
I0120 12:35:20.534581 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | Error getting ssh command 'exit 0' : ssh command error:
I0120 12:35:20.534592 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | command : exit 0
I0120 12:35:20.534604 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | err : exit status 255
I0120 12:35:20.534639 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | output :
I0120 12:35:23.534852 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | Getting to WaitForSSH function...
I0120 12:35:23.537219 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:35:23.537562 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
I0120 12:35:23.537593 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:35:23.537711 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | Using SSH client type: external
I0120 12:35:23.537734 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | Using SSH private key: /home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009/id_rsa (-rw-------)
I0120 12:35:23.537766 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.190 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009/id_rsa -p 22] /usr/bin/ssh <nil>}
I0120 12:35:23.537778 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | About to run SSH command:
I0120 12:35:23.537786 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | exit 0
I0120 12:35:23.666504 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | SSH cmd err, output: <nil>:
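The successful exit 0 above is libmachine's SSH liveness probe against the new guest. The equivalent manual check, built only from the key path, user, and IP recorded in this log:

  ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i /home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009/id_rsa docker@192.168.50.190 'exit 0'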
I0120 12:35:23.666844 593695 main.go:141] libmachine: (custom-flannel-912009) KVM machine creation complete
I0120 12:35:23.667202 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetConfigRaw
I0120 12:35:23.667966 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .DriverName
I0120 12:35:23.668197 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .DriverName
I0120 12:35:23.668360 593695 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
I0120 12:35:23.668377 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetState
I0120 12:35:23.670153 593695 main.go:141] libmachine: Detecting operating system of created instance...
I0120 12:35:23.670169 593695 main.go:141] libmachine: Waiting for SSH to be available...
I0120 12:35:23.670175 593695 main.go:141] libmachine: Getting to WaitForSSH function...
I0120 12:35:23.670181 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHHostname
I0120 12:35:23.673109 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:35:23.673528 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
I0120 12:35:23.673551 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:35:23.673837 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHPort
I0120 12:35:23.674105 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
I0120 12:35:23.674329 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
I0120 12:35:23.674532 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHUsername
I0120 12:35:23.674693 593695 main.go:141] libmachine: Using SSH client type: native
I0120 12:35:23.674971 593695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.50.190 22 <nil> <nil>}
I0120 12:35:23.674989 593695 main.go:141] libmachine: About to run SSH command:
exit 0
I0120 12:35:23.781486 593695 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0120 12:35:23.781512 593695 main.go:141] libmachine: Detecting the provisioner...
I0120 12:35:23.781520 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHHostname
I0120 12:35:23.784548 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:35:23.785046 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
I0120 12:35:23.785077 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:35:23.785303 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHPort
I0120 12:35:23.785511 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
I0120 12:35:23.785694 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
I0120 12:35:23.785856 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHUsername
I0120 12:35:23.786038 593695 main.go:141] libmachine: Using SSH client type: native
I0120 12:35:23.786249 593695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.50.190 22 <nil> <nil>}
I0120 12:35:23.786263 593695 main.go:141] libmachine: About to run SSH command:
cat /etc/os-release
I0120 12:35:23.895060 593695 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"
I0120 12:35:23.895164 593695 main.go:141] libmachine: found compatible host: buildroot
I0120 12:35:23.895185 593695 main.go:141] libmachine: Provisioning with buildroot...
I0120 12:35:23.895198 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetMachineName
I0120 12:35:23.895470 593695 buildroot.go:166] provisioning hostname "custom-flannel-912009"
I0120 12:35:23.895510 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetMachineName
I0120 12:35:23.895752 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHHostname
I0120 12:35:23.899661 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:35:23.900121 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
I0120 12:35:23.900148 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:35:23.900337 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHPort
I0120 12:35:23.900565 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
I0120 12:35:23.900738 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
I0120 12:35:23.900892 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHUsername
I0120 12:35:23.901167 593695 main.go:141] libmachine: Using SSH client type: native
I0120 12:35:23.901402 593695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.50.190 22 <nil> <nil>}
I0120 12:35:23.901418 593695 main.go:141] libmachine: About to run SSH command:
sudo hostname custom-flannel-912009 && echo "custom-flannel-912009" | sudo tee /etc/hostname
I0120 12:35:24.029708 593695 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-912009
I0120 12:35:24.029744 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHHostname
I0120 12:35:24.033017 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:35:24.033445 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
I0120 12:35:24.033478 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:35:24.033777 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHPort
I0120 12:35:24.034045 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
I0120 12:35:24.034311 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
I0120 12:35:24.034484 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHUsername
I0120 12:35:24.034713 593695 main.go:141] libmachine: Using SSH client type: native
I0120 12:35:24.034960 593695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.50.190 22 <nil> <nil>}
I0120 12:35:24.034989 593695 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\scustom-flannel-912009' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-912009/g' /etc/hosts;
  else
    echo '127.0.1.1 custom-flannel-912009' | sudo tee -a /etc/hosts;
  fi
fi
I0120 12:35:24.155682 593695 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0120 12:35:24.155719 593695 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20151-530330/.minikube CaCertPath:/home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20151-530330/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20151-530330/.minikube}
I0120 12:35:24.155742 593695 buildroot.go:174] setting up certificates
I0120 12:35:24.155752 593695 provision.go:84] configureAuth start
I0120 12:35:24.155761 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetMachineName
I0120 12:35:24.156072 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetIP
I0120 12:35:24.159246 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:35:24.159526 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
I0120 12:35:24.159559 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:35:24.159719 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHHostname
I0120 12:35:24.162295 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:35:24.162595 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
I0120 12:35:24.162622 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:35:24.162796 593695 provision.go:143] copyHostCerts
I0120 12:35:24.162871 593695 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-530330/.minikube/key.pem, removing ...
I0120 12:35:24.162897 593695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-530330/.minikube/key.pem
I0120 12:35:24.163012 593695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20151-530330/.minikube/key.pem (1675 bytes)
I0120 12:35:24.163166 593695 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-530330/.minikube/ca.pem, removing ...
I0120 12:35:24.163182 593695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-530330/.minikube/ca.pem
I0120 12:35:24.163224 593695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20151-530330/.minikube/ca.pem (1078 bytes)
I0120 12:35:24.163301 593695 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-530330/.minikube/cert.pem, removing ...
I0120 12:35:24.163311 593695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-530330/.minikube/cert.pem
I0120 12:35:24.163352 593695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20151-530330/.minikube/cert.pem (1123 bytes)
I0120 12:35:24.163530 593695 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20151-530330/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-912009 san=[127.0.0.1 192.168.50.190 custom-flannel-912009 localhost minikube]
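The step above mints a server certificate signed by the test CA, with the SANs listed in the log line. As a minimal, self-contained Go sketch of that pattern (illustrative only, not minikube's actual provision.go; the file names ca.pem/ca-key.pem, the PKCS#1 RSA key format, and the 3-year validity are assumptions for the example):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load the CA certificate and private key (assumed PEM-encoded, PKCS#1 RSA).
	caCertPEM, err := os.ReadFile("ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	if err != nil {
		log.Fatal(err)
	}
	caBlock, _ := pem.Decode(caCertPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}

	// Fresh key pair for the server certificate.
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}

	// Template mirroring the SANs recorded in the log: two IPs and three DNS names.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.custom-flannel-912009"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.190")},
		DNSNames:     []string{"custom-flannel-912009", "localhost", "minikube"},
	}

	// Sign the new public key with the CA and emit the certificate and key as PEM.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)})
}

The resulting certificate/key pair roughly corresponds to the server.pem and server-key.pem files that are copied to /etc/docker a few lines further down in this log.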
I0120 12:35:24.241848 593695 provision.go:177] copyRemoteCerts
I0120 12:35:24.241916 593695 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0120 12:35:24.241950 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHHostname
I0120 12:35:24.244770 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:35:24.245114 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
I0120 12:35:24.245138 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:35:24.245331 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHPort
I0120 12:35:24.245514 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
I0120 12:35:24.245668 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHUsername
I0120 12:35:24.245760 593695 sshutil.go:53] new ssh client: &{IP:192.168.50.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009/id_rsa Username:docker}
I0120 12:35:24.332818 593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0120 12:35:24.361699 593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I0120 12:35:24.391399 593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0120 12:35:24.418431 593695 provision.go:87] duration metric: took 262.665168ms to configureAuth
I0120 12:35:24.418473 593695 buildroot.go:189] setting minikube options for container-runtime
I0120 12:35:24.418753 593695 config.go:182] Loaded profile config "custom-flannel-912009": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 12:35:24.418792 593695 main.go:141] libmachine: Checking connection to Docker...
I0120 12:35:24.418805 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetURL
I0120 12:35:24.420068 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | using libvirt version 6000000
I0120 12:35:24.422715 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:35:24.423162 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
I0120 12:35:24.423190 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:35:24.423456 593695 main.go:141] libmachine: Docker is up and running!
I0120 12:35:24.423476 593695 main.go:141] libmachine: Reticulating splines...
I0120 12:35:24.423486 593695 client.go:171] duration metric: took 28.999470441s to LocalClient.Create
I0120 12:35:24.423515 593695 start.go:167] duration metric: took 28.999566096s to libmachine.API.Create "custom-flannel-912009"
I0120 12:35:24.423528 593695 start.go:293] postStartSetup for "custom-flannel-912009" (driver="kvm2")
I0120 12:35:24.423542 593695 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0120 12:35:24.423569 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .DriverName
I0120 12:35:24.423829 593695 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0120 12:35:24.423855 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHHostname
I0120 12:35:24.426268 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:35:24.426582 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
I0120 12:35:24.426609 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:35:24.426817 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHPort
I0120 12:35:24.427012 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
I0120 12:35:24.427219 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHUsername
I0120 12:35:24.427395 593695 sshutil.go:53] new ssh client: &{IP:192.168.50.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009/id_rsa Username:docker}
I0120 12:35:24.509285 593695 ssh_runner.go:195] Run: cat /etc/os-release
I0120 12:35:24.513984 593695 info.go:137] Remote host: Buildroot 2023.02.9
I0120 12:35:24.514016 593695 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-530330/.minikube/addons for local assets ...
I0120 12:35:24.514091 593695 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-530330/.minikube/files for local assets ...
I0120 12:35:24.514173 593695 filesync.go:149] local asset: /home/jenkins/minikube-integration/20151-530330/.minikube/files/etc/ssl/certs/5375812.pem -> 5375812.pem in /etc/ssl/certs
I0120 12:35:24.514260 593695 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0120 12:35:24.523956 593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/files/etc/ssl/certs/5375812.pem --> /etc/ssl/certs/5375812.pem (1708 bytes)
I0120 12:35:24.553908 593695 start.go:296] duration metric: took 130.36042ms for postStartSetup
I0120 12:35:24.553975 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetConfigRaw
I0120 12:35:24.554680 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetIP
I0120 12:35:24.557887 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:35:24.558360 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
I0120 12:35:24.558399 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:35:24.558632 593695 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/config.json ...
I0120 12:35:24.558858 593695 start.go:128] duration metric: took 29.154769177s to createHost
I0120 12:35:24.558884 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHHostname
I0120 12:35:24.561339 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:35:24.561943 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
I0120 12:35:24.561994 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:35:24.562136 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHPort
I0120 12:35:24.562360 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
I0120 12:35:24.562560 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
I0120 12:35:24.562828 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHUsername
I0120 12:35:24.563024 593695 main.go:141] libmachine: Using SSH client type: native
I0120 12:35:24.563258 593695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.50.190 22 <nil> <nil>}
I0120 12:35:24.563273 593695 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0120 12:35:24.671152 593695 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737376524.647779402
I0120 12:35:24.671177 593695 fix.go:216] guest clock: 1737376524.647779402
I0120 12:35:24.671187 593695 fix.go:229] Guest: 2025-01-20 12:35:24.647779402 +0000 UTC Remote: 2025-01-20 12:35:24.558871919 +0000 UTC m=+29.288117911 (delta=88.907483ms)
I0120 12:35:24.671208 593695 fix.go:200] guest clock delta is within tolerance: 88.907483ms
I0120 12:35:24.671213 593695 start.go:83] releasing machines lock for "custom-flannel-912009", held for 29.26722146s
I0120 12:35:24.671257 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .DriverName
I0120 12:35:24.671597 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetIP
I0120 12:35:24.674668 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:35:24.675144 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
I0120 12:35:24.675179 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:35:24.675303 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .DriverName
I0120 12:35:24.675888 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .DriverName
I0120 12:35:24.676102 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .DriverName
I0120 12:35:24.676270 593695 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0120 12:35:24.676339 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHHostname
I0120 12:35:24.676389 593695 ssh_runner.go:195] Run: cat /version.json
I0120 12:35:24.676418 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHHostname
I0120 12:35:24.679423 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:35:24.679453 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:35:24.679849 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
I0120 12:35:24.679890 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
I0120 12:35:24.679912 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:35:24.679941 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:35:24.680114 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHPort
I0120 12:35:24.680284 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHPort
I0120 12:35:24.680292 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
I0120 12:35:24.680454 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHUsername
I0120 12:35:24.680472 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
I0120 12:35:24.680601 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHUsername
I0120 12:35:24.680657 593695 sshutil.go:53] new ssh client: &{IP:192.168.50.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009/id_rsa Username:docker}
I0120 12:35:24.680719 593695 sshutil.go:53] new ssh client: &{IP:192.168.50.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009/id_rsa Username:docker}
I0120 12:35:24.767818 593695 ssh_runner.go:195] Run: systemctl --version
I0120 12:35:24.795757 593695 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0120 12:35:24.801932 593695 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0120 12:35:24.802005 593695 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0120 12:35:24.822047 593695 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0120 12:35:24.822074 593695 start.go:495] detecting cgroup driver to use...
I0120 12:35:24.822147 593695 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0120 12:35:24.853585 593695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0120 12:35:24.869225 593695 docker.go:217] disabling cri-docker service (if available) ...
I0120 12:35:24.869302 593695 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0120 12:35:24.883816 593695 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0120 12:35:24.897972 593695 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0120 12:35:25.028005 593695 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0120 12:35:25.171259 593695 docker.go:233] disabling docker service ...
I0120 12:35:25.171345 593695 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0120 12:35:25.187813 593695 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0120 12:35:25.201348 593695 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0120 12:35:24.343295 591909 pod_ready.go:93] pod "kube-proxy-d42xv" in "kube-system" namespace has status "Ready":"True"
I0120 12:35:24.343328 591909 pod_ready.go:82] duration metric: took 328.982488ms for pod "kube-proxy-d42xv" in "kube-system" namespace to be "Ready" ...
I0120 12:35:24.343343 591909 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-calico-912009" in "kube-system" namespace to be "Ready" ...
I0120 12:35:24.741158 591909 pod_ready.go:93] pod "kube-scheduler-calico-912009" in "kube-system" namespace has status "Ready":"True"
I0120 12:35:24.741188 591909 pod_ready.go:82] duration metric: took 397.835554ms for pod "kube-scheduler-calico-912009" in "kube-system" namespace to be "Ready" ...
I0120 12:35:24.741204 591909 pod_ready.go:39] duration metric: took 24.323905541s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0120 12:35:24.741225 591909 api_server.go:52] waiting for apiserver process to appear ...
I0120 12:35:24.741287 591909 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0120 12:35:24.758948 591909 api_server.go:72] duration metric: took 33.170230566s to wait for apiserver process to appear ...
I0120 12:35:24.758984 591909 api_server.go:88] waiting for apiserver healthz status ...
I0120 12:35:24.759013 591909 api_server.go:253] Checking apiserver healthz at https://192.168.61.244:8443/healthz ...
I0120 12:35:24.763591 591909 api_server.go:279] https://192.168.61.244:8443/healthz returned 200:
ok
I0120 12:35:24.764729 591909 api_server.go:141] control plane version: v1.32.0
I0120 12:35:24.764761 591909 api_server.go:131] duration metric: took 5.768981ms to wait for apiserver health ...
I0120 12:35:24.764772 591909 system_pods.go:43] waiting for kube-system pods to appear ...
I0120 12:35:24.947474 591909 system_pods.go:59] 9 kube-system pods found
I0120 12:35:24.947535 591909 system_pods.go:61] "calico-kube-controllers-5745477d4d-mz446" [84466c15-f6c8-4e5e-9e75-a9f5712ec8e6] Running
I0120 12:35:24.947545 591909 system_pods.go:61] "calico-node-58f5q" [4c659cf9-7e8b-4f9e-a251-005a41562c7c] Running
I0120 12:35:24.947551 591909 system_pods.go:61] "coredns-668d6bf9bc-qtrbt" [2bf73e76-3e51-4775-931e-49299625214f] Running
I0120 12:35:24.947555 591909 system_pods.go:61] "etcd-calico-912009" [39631069-4624-4ede-8433-ccc68d866eaa] Running
I0120 12:35:24.947560 591909 system_pods.go:61] "kube-apiserver-calico-912009" [50d0f21d-f92e-4c26-8dfc-e37ed39827cb] Running
I0120 12:35:24.947565 591909 system_pods.go:61] "kube-controller-manager-calico-912009" [1f3aef6d-59c0-4413-aa4e-6e23c8881f78] Running
I0120 12:35:24.947570 591909 system_pods.go:61] "kube-proxy-d42xv" [3d24c7d5-50b1-4871-bc05-74fd339a3e0b] Running
I0120 12:35:24.947574 591909 system_pods.go:61] "kube-scheduler-calico-912009" [927218e7-10b5-472b-accc-e139302981f3] Running
I0120 12:35:24.947579 591909 system_pods.go:61] "storage-provisioner" [2124f06a-3841-4d00-85f3-6c7001d3d30d] Running
I0120 12:35:24.947587 591909 system_pods.go:74] duration metric: took 182.808552ms to wait for pod list to return data ...
I0120 12:35:24.947598 591909 default_sa.go:34] waiting for default service account to be created ...
I0120 12:35:25.141030 591909 default_sa.go:45] found service account: "default"
I0120 12:35:25.141064 591909 default_sa.go:55] duration metric: took 193.459842ms for default service account to be created ...
I0120 12:35:25.141074 591909 system_pods.go:137] waiting for k8s-apps to be running ...
I0120 12:35:25.345280 591909 system_pods.go:87] 9 kube-system pods found
I0120 12:35:25.541923 591909 system_pods.go:105] "calico-kube-controllers-5745477d4d-mz446" [84466c15-f6c8-4e5e-9e75-a9f5712ec8e6] Running
I0120 12:35:25.541949 591909 system_pods.go:105] "calico-node-58f5q" [4c659cf9-7e8b-4f9e-a251-005a41562c7c] Running
I0120 12:35:25.541955 591909 system_pods.go:105] "coredns-668d6bf9bc-qtrbt" [2bf73e76-3e51-4775-931e-49299625214f] Running
I0120 12:35:25.541960 591909 system_pods.go:105] "etcd-calico-912009" [39631069-4624-4ede-8433-ccc68d866eaa] Running
I0120 12:35:25.541965 591909 system_pods.go:105] "kube-apiserver-calico-912009" [50d0f21d-f92e-4c26-8dfc-e37ed39827cb] Running
I0120 12:35:25.541969 591909 system_pods.go:105] "kube-controller-manager-calico-912009" [1f3aef6d-59c0-4413-aa4e-6e23c8881f78] Running
I0120 12:35:25.541974 591909 system_pods.go:105] "kube-proxy-d42xv" [3d24c7d5-50b1-4871-bc05-74fd339a3e0b] Running
I0120 12:35:25.541981 591909 system_pods.go:105] "kube-scheduler-calico-912009" [927218e7-10b5-472b-accc-e139302981f3] Running
I0120 12:35:25.541993 591909 system_pods.go:105] "storage-provisioner" [2124f06a-3841-4d00-85f3-6c7001d3d30d] Running
I0120 12:35:25.542005 591909 system_pods.go:147] duration metric: took 400.9237ms to wait for k8s-apps to be running ...
I0120 12:35:25.542022 591909 system_svc.go:44] waiting for kubelet service to be running ....
I0120 12:35:25.542076 591909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0120 12:35:25.559267 591909 system_svc.go:56] duration metric: took 17.236172ms WaitForService to wait for kubelet
I0120 12:35:25.559301 591909 kubeadm.go:582] duration metric: took 33.970593024s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0120 12:35:25.559343 591909 node_conditions.go:102] verifying NodePressure condition ...
I0120 12:35:25.741320 591909 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0120 12:35:25.741363 591909 node_conditions.go:123] node cpu capacity is 2
I0120 12:35:25.741379 591909 node_conditions.go:105] duration metric: took 182.030441ms to run NodePressure ...
I0120 12:35:25.741395 591909 start.go:241] waiting for startup goroutines ...
I0120 12:35:25.741405 591909 start.go:246] waiting for cluster config update ...
I0120 12:35:25.741426 591909 start.go:255] writing updated cluster config ...
I0120 12:35:25.798226 591909 ssh_runner.go:195] Run: rm -f paused
I0120 12:35:25.864008 591909 start.go:600] kubectl: 1.32.1, cluster: 1.32.0 (minor skew: 0)
I0120 12:35:25.935661 591909 out.go:177] * Done! kubectl is now configured to use "calico-912009" cluster and "default" namespace by default
I0120 12:35:25.355950 593695 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0120 12:35:25.488046 593695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0120 12:35:25.503617 593695 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0120 12:35:25.524909 593695 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0120 12:35:25.535904 593695 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0120 12:35:25.548267 593695 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0120 12:35:25.548339 593695 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0120 12:35:25.559155 593695 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0120 12:35:25.569907 593695 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0120 12:35:25.581371 593695 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0120 12:35:25.593457 593695 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0120 12:35:25.605028 593695 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0120 12:35:25.617300 593695 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0120 12:35:25.629598 593695 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0120 12:35:25.641451 593695 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0120 12:35:25.653746 593695 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0120 12:35:25.653896 593695 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0120 12:35:25.669029 593695 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0120 12:35:25.682069 593695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0120 12:35:25.826095 593695 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0120 12:35:25.865783 593695 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0120 12:35:25.865871 593695 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0120 12:35:25.871185 593695 retry.go:31] will retry after 1.23432325s: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
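The 1.23s retry above is a bounded poll: minikube waits up to 60s for /run/containerd/containerd.sock to reappear after restarting containerd. A minimal sketch of the same wait-for-socket pattern (illustrative only, not minikube's actual retry.go; the timeout and interval values are assumptions) could look like this in Go:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls path until it exists or the deadline passes.
func waitForSocket(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // socket file is present
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second, time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("containerd socket is ready")
}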
I0120 12:35:27.105977 593695 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0120 12:35:27.111951 593695 start.go:563] Will wait 60s for crictl version
I0120 12:35:27.112034 593695 ssh_runner.go:195] Run: which crictl
I0120 12:35:27.116737 593695 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0120 12:35:27.161217 593695 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.7.23
RuntimeApiVersion: v1
I0120 12:35:27.161291 593695 ssh_runner.go:195] Run: containerd --version
I0120 12:35:27.190230 593695 ssh_runner.go:195] Run: containerd --version
I0120 12:35:27.219481 593695 out.go:177] * Preparing Kubernetes v1.32.0 on containerd 1.7.23 ...
I0120 12:35:27.220968 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetIP
I0120 12:35:27.223799 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:35:27.224137 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
I0120 12:35:27.224161 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:35:27.224394 593695 ssh_runner.go:195] Run: grep 192.168.50.1 host.minikube.internal$ /etc/hosts
I0120 12:35:27.228599 593695 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0120 12:35:27.242027 593695 kubeadm.go:883] updating cluster {Name:custom-flannel-912009 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:custom-flannel-912009 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.50.190 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0120 12:35:27.242166 593695 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
I0120 12:35:27.242266 593695 ssh_runner.go:195] Run: sudo crictl images --output json
I0120 12:35:27.280733 593695 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
I0120 12:35:27.280808 593695 ssh_runner.go:195] Run: which lz4
I0120 12:35:27.285414 593695 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0120 12:35:27.290608 593695 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0120 12:35:27.290637 593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (398081533 bytes)
I0120 12:35:28.842033 593695 containerd.go:563] duration metric: took 1.556664096s to copy over tarball
I0120 12:35:28.842105 593695 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I0120 12:35:31.289395 593695 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.44725613s)
I0120 12:35:31.289429 593695 containerd.go:570] duration metric: took 2.44736643s to extract the tarball
I0120 12:35:31.289440 593695 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0120 12:35:31.333681 593695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0120 12:35:31.450015 593695 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0120 12:35:31.481159 593695 ssh_runner.go:195] Run: sudo crictl images --output json
I0120 12:35:31.540445 593695 retry.go:31] will retry after 180.029348ms: sudo crictl images --output json: Process exited with status 1
stdout:
stderr:
time="2025-01-20T12:35:31Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
I0120 12:35:31.720933 593695 ssh_runner.go:195] Run: sudo crictl images --output json
I0120 12:35:31.764494 593695 containerd.go:627] all images are preloaded for containerd runtime.
I0120 12:35:31.764524 593695 cache_images.go:84] Images are preloaded, skipping loading
I0120 12:35:31.764532 593695 kubeadm.go:934] updating node { 192.168.50.190 8443 v1.32.0 containerd true true} ...
I0120 12:35:31.764644 593695 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=custom-flannel-912009 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.190
[Install]
config:
{KubernetesVersion:v1.32.0 ClusterName:custom-flannel-912009 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
I0120 12:35:31.764699 593695 ssh_runner.go:195] Run: sudo crictl info
I0120 12:35:31.801010 593695 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
I0120 12:35:31.801048 593695 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0120 12:35:31.801070 593695 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.190 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-912009 NodeName:custom-flannel-912009 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.190"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.190 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0120 12:35:31.801206 593695 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.50.190
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "custom-flannel-912009"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.50.190"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.50.190"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      - name: "proxy-refresh-interval"
        value: "70000"
kubernetesVersion: v1.32.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0120 12:35:31.801295 593695 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
I0120 12:35:31.812630 593695 binaries.go:44] Found k8s binaries, skipping transfer
I0120 12:35:31.812728 593695 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0120 12:35:31.823817 593695 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
I0120 12:35:31.842930 593695 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0120 12:35:31.861044 593695 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2317 bytes)
I0120 12:35:31.880051 593695 ssh_runner.go:195] Run: grep 192.168.50.190 control-plane.minikube.internal$ /etc/hosts
I0120 12:35:31.884576 593695 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.190 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0120 12:35:31.898346 593695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0120 12:35:32.028778 593695 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0120 12:35:32.052796 593695 certs.go:68] Setting up /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009 for IP: 192.168.50.190
I0120 12:35:32.052827 593695 certs.go:194] generating shared ca certs ...
I0120 12:35:32.052845 593695 certs.go:226] acquiring lock for ca certs: {Name:mk52c62007c989bdf47cf8ee68bb49e4d4d8996b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 12:35:32.053075 593695 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20151-530330/.minikube/ca.key
I0120 12:35:32.053147 593695 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20151-530330/.minikube/proxy-client-ca.key
I0120 12:35:32.053163 593695 certs.go:256] generating profile certs ...
I0120 12:35:32.053247 593695 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/client.key
I0120 12:35:32.053279 593695 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/client.crt with IP's: []
I0120 12:35:32.452867 593695 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/client.crt ...
I0120 12:35:32.452901 593695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/client.crt: {Name:mk835ad9719695d1ab06cc7c134d449ff4a8ec37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 12:35:32.453073 593695 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/client.key ...
I0120 12:35:32.453086 593695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/client.key: {Name:mk5dcd2ed981e6e4fa3ffc179551607c1e7c7c35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 12:35:32.460567 593695 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/apiserver.key.77137fdc
I0120 12:35:32.460603 593695 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/apiserver.crt.77137fdc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.190]
I0120 12:35:32.709471 593695 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/apiserver.crt.77137fdc ...
I0120 12:35:32.709507 593695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/apiserver.crt.77137fdc: {Name:mkecfe0edd1856a9b879cb97ff718bab280ced2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 12:35:32.709699 593695 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/apiserver.key.77137fdc ...
I0120 12:35:32.709716 593695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/apiserver.key.77137fdc: {Name:mk6d882a97424f5468af12647844aaa949a2932d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 12:35:32.709838 593695 certs.go:381] copying /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/apiserver.crt.77137fdc -> /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/apiserver.crt
I0120 12:35:32.709950 593695 certs.go:385] copying /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/apiserver.key.77137fdc -> /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/apiserver.key
I0120 12:35:32.710022 593695 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/proxy-client.key
I0120 12:35:32.710036 593695 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/proxy-client.crt with IP's: []
I0120 12:35:33.008294 593695 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/proxy-client.crt ...
I0120 12:35:33.008328 593695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/proxy-client.crt: {Name:mk49acca2ab8ab3a30e85bb0e3b8b16095040d06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 12:35:33.008501 593695 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/proxy-client.key ...
I0120 12:35:33.008514 593695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/proxy-client.key: {Name:mkc4e59c474ddf1c18711f46c3fda8af2d43d2f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 12:35:33.008678 593695 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/537581.pem (1338 bytes)
W0120 12:35:33.008717 593695 certs.go:480] ignoring /home/jenkins/minikube-integration/20151-530330/.minikube/certs/537581_empty.pem, impossibly tiny 0 bytes
I0120 12:35:33.008726 593695 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca-key.pem (1679 bytes)
I0120 12:35:33.008747 593695 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/ca.pem (1078 bytes)
I0120 12:35:33.008801 593695 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/cert.pem (1123 bytes)
I0120 12:35:33.008830 593695 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-530330/.minikube/certs/key.pem (1675 bytes)
I0120 12:35:33.008869 593695 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-530330/.minikube/files/etc/ssl/certs/5375812.pem (1708 bytes)
I0120 12:35:33.009450 593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0120 12:35:33.037734 593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0120 12:35:33.078488 593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0120 12:35:33.105293 593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0120 12:35:33.130922 593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
I0120 12:35:33.156034 593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0120 12:35:33.181145 593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0120 12:35:33.209991 593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/profiles/custom-flannel-912009/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0120 12:35:33.236891 593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0120 12:35:33.263012 593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/certs/537581.pem --> /usr/share/ca-certificates/537581.pem (1338 bytes)
I0120 12:35:33.291892 593695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-530330/.minikube/files/etc/ssl/certs/5375812.pem --> /usr/share/ca-certificates/5375812.pem (1708 bytes)
I0120 12:35:33.320316 593695 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0120 12:35:33.339826 593695 ssh_runner.go:195] Run: openssl version
I0120 12:35:33.346196 593695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0120 12:35:33.360216 593695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0120 12:35:33.365369 593695 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 11:16 /usr/share/ca-certificates/minikubeCA.pem
I0120 12:35:33.365457 593695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0120 12:35:33.371913 593695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0120 12:35:33.384511 593695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537581.pem && ln -fs /usr/share/ca-certificates/537581.pem /etc/ssl/certs/537581.pem"
I0120 12:35:33.396943 593695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537581.pem
I0120 12:35:33.402006 593695 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 11:24 /usr/share/ca-certificates/537581.pem
I0120 12:35:33.402094 593695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537581.pem
I0120 12:35:33.408421 593695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537581.pem /etc/ssl/certs/51391683.0"
I0120 12:35:33.422913 593695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5375812.pem && ln -fs /usr/share/ca-certificates/5375812.pem /etc/ssl/certs/5375812.pem"
I0120 12:35:33.446953 593695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5375812.pem
I0120 12:35:33.460154 593695 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 11:24 /usr/share/ca-certificates/5375812.pem
I0120 12:35:33.460243 593695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5375812.pem
I0120 12:35:33.473049 593695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5375812.pem /etc/ssl/certs/3ec20f2e.0"
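The openssl/ln commands above follow the standard OpenSSL subject-hash lookup convention: each CA file installed under /usr/share/ca-certificates gets a symlink named <subject-hash>.0 in /etc/ssl/certs so TLS clients can find it by hash. A small hypothetical Go sketch of that convention (illustrative, not minikube's code):

// hash_symlink.go - compute the OpenSSL subject hash of a CA cert and create
// the /etc/ssl/certs/<hash>.0 symlink that OpenSSL-based clients look up.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Replace any existing link so repeated runs stay idempotent (mirrors ln -fs).
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
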
I0120 12:35:33.494370 593695 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0120 12:35:33.499833 593695 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0120 12:35:33.499899 593695 kubeadm.go:392] StartCluster: {Name:custom-flannel-912009 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:custom-flannel-912009 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.50.190 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0120 12:35:33.500002 593695 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0120 12:35:33.500097 593695 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0120 12:35:33.554921 593695 cri.go:89] found id: ""
I0120 12:35:33.555004 593695 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0120 12:35:33.567155 593695 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0120 12:35:33.579445 593695 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0120 12:35:33.597705 593695 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0120 12:35:33.597735 593695 kubeadm.go:157] found existing configuration files:
I0120 12:35:33.597796 593695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0120 12:35:33.610082 593695 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0120 12:35:33.610143 593695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0120 12:35:33.620572 593695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0120 12:35:33.630336 593695 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0120 12:35:33.630477 593695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0120 12:35:33.642367 593695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0120 12:35:33.654203 593695 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0120 12:35:33.654285 593695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0120 12:35:33.666300 593695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0120 12:35:33.678958 593695 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0120 12:35:33.679034 593695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0120 12:35:33.690383 593695 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0120 12:35:33.751799 593695 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
I0120 12:35:33.751856 593695 kubeadm.go:310] [preflight] Running pre-flight checks
I0120 12:35:33.868316 593695 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0120 12:35:33.868495 593695 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0120 12:35:33.868635 593695 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0120 12:35:33.878015 593695 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0120 12:35:33.880879 593695 out.go:235] - Generating certificates and keys ...
I0120 12:35:33.880991 593695 kubeadm.go:310] [certs] Using existing ca certificate authority
I0120 12:35:33.881075 593695 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0120 12:35:34.118211 593695 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0120 12:35:34.268264 593695 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0120 12:35:34.395094 593695 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0120 12:35:34.615258 593695 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0120 12:35:34.840828 593695 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0120 12:35:34.841049 593695 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-912009 localhost] and IPs [192.168.50.190 127.0.0.1 ::1]
I0120 12:35:34.980318 593695 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0120 12:35:34.980559 593695 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-912009 localhost] and IPs [192.168.50.190 127.0.0.1 ::1]
I0120 12:35:35.340147 593695 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0120 12:35:35.661731 593695 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0120 12:35:35.819536 593695 kubeadm.go:310] [certs] Generating "sa" key and public key
I0120 12:35:35.819789 593695 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0120 12:35:36.025686 593695 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0120 12:35:36.151576 593695 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0120 12:35:36.213677 593695 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0120 12:35:36.370255 593695 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0120 12:35:36.699839 593695 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0120 12:35:36.702474 593695 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0120 12:35:36.706508 593695 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0120 12:35:36.708260 593695 out.go:235] - Booting up control plane ...
I0120 12:35:36.708404 593695 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0120 12:35:36.708515 593695 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0120 12:35:36.708618 593695 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0120 12:35:36.727916 593695 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0120 12:35:36.734985 593695 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0120 12:35:36.735050 593695 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0120 12:35:36.891554 593695 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0120 12:35:36.891696 593695 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0120 12:35:37.892390 593695 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001463848s
I0120 12:35:37.892535 593695 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0120 12:35:42.892060 593695 kubeadm.go:310] [api-check] The API server is healthy after 5.002045649s
I0120 12:35:42.907359 593695 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0120 12:35:42.923769 593695 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0120 12:35:42.947405 593695 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0120 12:35:42.947611 593695 kubeadm.go:310] [mark-control-plane] Marking the node custom-flannel-912009 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0120 12:35:42.957385 593695 kubeadm.go:310] [bootstrap-token] Using token: pwfscc.y1n10nfegb7ld7mi
I0120 12:35:42.958829 593695 out.go:235] - Configuring RBAC rules ...
I0120 12:35:42.958983 593695 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0120 12:35:42.963002 593695 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0120 12:35:42.972421 593695 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0120 12:35:42.976005 593695 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0120 12:35:42.981865 593695 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0120 12:35:42.985056 593695 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0120 12:35:43.299543 593695 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0120 12:35:43.743871 593695 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0120 12:35:44.299948 593695 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0120 12:35:44.304043 593695 kubeadm.go:310]
I0120 12:35:44.304135 593695 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0120 12:35:44.304148 593695 kubeadm.go:310]
I0120 12:35:44.304271 593695 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0120 12:35:44.304306 593695 kubeadm.go:310]
I0120 12:35:44.304374 593695 kubeadm.go:310] mkdir -p $HOME/.kube
I0120 12:35:44.304467 593695 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0120 12:35:44.304538 593695 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0120 12:35:44.304551 593695 kubeadm.go:310]
I0120 12:35:44.304616 593695 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0120 12:35:44.304627 593695 kubeadm.go:310]
I0120 12:35:44.304689 593695 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0120 12:35:44.304699 593695 kubeadm.go:310]
I0120 12:35:44.304767 593695 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0120 12:35:44.304884 593695 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0120 12:35:44.304988 593695 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0120 12:35:44.305012 593695 kubeadm.go:310]
I0120 12:35:44.305132 593695 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0120 12:35:44.305245 593695 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0120 12:35:44.305260 593695 kubeadm.go:310]
I0120 12:35:44.305359 593695 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pwfscc.y1n10nfegb7ld7mi \
I0120 12:35:44.305494 593695 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:326640d5f51fa6eddf3fd6f2b38f5a08d4608620129e8898d45359839be856c3 \
I0120 12:35:44.305524 593695 kubeadm.go:310] --control-plane
I0120 12:35:44.305529 593695 kubeadm.go:310]
I0120 12:35:44.305630 593695 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0120 12:35:44.305636 593695 kubeadm.go:310]
I0120 12:35:44.305725 593695 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pwfscc.y1n10nfegb7ld7mi \
I0120 12:35:44.305865 593695 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:326640d5f51fa6eddf3fd6f2b38f5a08d4608620129e8898d45359839be856c3
I0120 12:35:44.309010 593695 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
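For reference, the --discovery-token-ca-cert-hash value printed in the join commands above is the SHA-256 digest of the cluster CA certificate's Subject Public Key Info, which kubeadm uses to pin token-based discovery. A hedged Go sketch of computing it from the CA file used inside the VM in this log (illustrative only):

// ca_hash.go - derive the kubeadm discovery CA cert hash from ca.crt.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded Subject Public Key Info of the CA.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
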
I0120 12:35:44.309072 593695 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
I0120 12:35:44.311925 593695 out.go:177] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
I0120 12:35:44.313463 593695 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.0/kubectl ...
I0120 12:35:44.313529 593695 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
I0120 12:35:44.319726 593695 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
I0120 12:35:44.319758 593695 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4591 bytes)
I0120 12:35:44.351216 593695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0120 12:35:44.868640 593695 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0120 12:35:44.868740 593695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 12:35:44.868782 593695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-912009 minikube.k8s.io/updated_at=2025_01_20T12_35_44_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=77d80cf1517f5f1439721b28711982314b21bec9 minikube.k8s.io/name=custom-flannel-912009 minikube.k8s.io/primary=true
I0120 12:35:45.116669 593695 ops.go:34] apiserver oom_adj: -16
I0120 12:35:45.116816 593695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 12:35:45.617431 593695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 12:35:46.117712 593695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 12:35:46.616896 593695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 12:35:47.117662 593695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 12:35:47.617183 593695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 12:35:48.116968 593695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 12:35:48.616887 593695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 12:35:48.749904 593695 kubeadm.go:1113] duration metric: took 3.881252521s to wait for elevateKubeSystemPrivileges
I0120 12:35:48.749953 593695 kubeadm.go:394] duration metric: took 15.250058721s to StartCluster
I0120 12:35:48.749980 593695 settings.go:142] acquiring lock: {Name:mkbafde306c71e7b8958e2377ddfa5a9e3a59113 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 12:35:48.750089 593695 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20151-530330/kubeconfig
I0120 12:35:48.752036 593695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-530330/kubeconfig: {Name:mk113e13541afa8413ea8a359169b0824f5f9ef4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 12:35:48.752297 593695 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0120 12:35:48.752305 593695 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.190 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0120 12:35:48.752376 593695 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0120 12:35:48.752503 593695 addons.go:69] Setting storage-provisioner=true in profile "custom-flannel-912009"
I0120 12:35:48.752529 593695 addons.go:238] Setting addon storage-provisioner=true in "custom-flannel-912009"
I0120 12:35:48.752553 593695 config.go:182] Loaded profile config "custom-flannel-912009": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 12:35:48.752573 593695 host.go:66] Checking if "custom-flannel-912009" exists ...
I0120 12:35:48.752614 593695 addons.go:69] Setting default-storageclass=true in profile "custom-flannel-912009"
I0120 12:35:48.752635 593695 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-912009"
I0120 12:35:48.753033 593695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 12:35:48.753071 593695 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 12:35:48.753077 593695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 12:35:48.753115 593695 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 12:35:48.754038 593695 out.go:177] * Verifying Kubernetes components...
I0120 12:35:48.755543 593695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0120 12:35:48.770900 593695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46513
I0120 12:35:48.770924 593695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46759
I0120 12:35:48.771512 593695 main.go:141] libmachine: () Calling .GetVersion
I0120 12:35:48.771523 593695 main.go:141] libmachine: () Calling .GetVersion
I0120 12:35:48.771980 593695 main.go:141] libmachine: Using API Version 1
I0120 12:35:48.771999 593695 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 12:35:48.772120 593695 main.go:141] libmachine: Using API Version 1
I0120 12:35:48.772167 593695 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 12:35:48.772407 593695 main.go:141] libmachine: () Calling .GetMachineName
I0120 12:35:48.772581 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetState
I0120 12:35:48.772694 593695 main.go:141] libmachine: () Calling .GetMachineName
I0120 12:35:48.773172 593695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 12:35:48.773221 593695 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 12:35:48.775953 593695 addons.go:238] Setting addon default-storageclass=true in "custom-flannel-912009"
I0120 12:35:48.775985 593695 host.go:66] Checking if "custom-flannel-912009" exists ...
I0120 12:35:48.776217 593695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 12:35:48.776242 593695 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 12:35:48.791662 593695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42903
I0120 12:35:48.791918 593695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46145
I0120 12:35:48.792260 593695 main.go:141] libmachine: () Calling .GetVersion
I0120 12:35:48.792600 593695 main.go:141] libmachine: () Calling .GetVersion
I0120 12:35:48.792770 593695 main.go:141] libmachine: Using API Version 1
I0120 12:35:48.792789 593695 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 12:35:48.793183 593695 main.go:141] libmachine: Using API Version 1
I0120 12:35:48.793202 593695 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 12:35:48.793265 593695 main.go:141] libmachine: () Calling .GetMachineName
I0120 12:35:48.793756 593695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 12:35:48.793790 593695 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 12:35:48.793902 593695 main.go:141] libmachine: () Calling .GetMachineName
I0120 12:35:48.794308 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetState
I0120 12:35:48.796179 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .DriverName
I0120 12:35:48.798629 593695 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0120 12:35:48.800337 593695 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0120 12:35:48.800353 593695 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0120 12:35:48.800370 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHHostname
I0120 12:35:48.803462 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:35:48.803925 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
I0120 12:35:48.803956 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:35:48.804206 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHPort
I0120 12:35:48.804403 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
I0120 12:35:48.804565 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHUsername
I0120 12:35:48.804707 593695 sshutil.go:53] new ssh client: &{IP:192.168.50.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009/id_rsa Username:docker}
I0120 12:35:48.811596 593695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39077
I0120 12:35:48.811951 593695 main.go:141] libmachine: () Calling .GetVersion
I0120 12:35:48.812485 593695 main.go:141] libmachine: Using API Version 1
I0120 12:35:48.812512 593695 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 12:35:48.812866 593695 main.go:141] libmachine: () Calling .GetMachineName
I0120 12:35:48.813065 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetState
I0120 12:35:48.814819 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .DriverName
I0120 12:35:48.814988 593695 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0120 12:35:48.814999 593695 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0120 12:35:48.815012 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHHostname
I0120 12:35:48.817477 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:35:48.817881 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:0c:b1", ip: ""} in network mk-custom-flannel-912009: {Iface:virbr2 ExpiryTime:2025-01-20 13:35:12 +0000 UTC Type:0 Mac:52:54:00:d9:0c:b1 Iaid: IPaddr:192.168.50.190 Prefix:24 Hostname:custom-flannel-912009 Clientid:01:52:54:00:d9:0c:b1}
I0120 12:35:48.817910 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | domain custom-flannel-912009 has defined IP address 192.168.50.190 and MAC address 52:54:00:d9:0c:b1 in network mk-custom-flannel-912009
I0120 12:35:48.818198 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHPort
I0120 12:35:48.818380 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHKeyPath
I0120 12:35:48.818527 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .GetSSHUsername
I0120 12:35:48.818657 593695 sshutil.go:53] new ssh client: &{IP:192.168.50.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20151-530330/.minikube/machines/custom-flannel-912009/id_rsa Username:docker}
I0120 12:35:49.140129 593695 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0120 12:35:49.140225 593695 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.50.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0120 12:35:49.271376 593695 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0120 12:35:49.277298 593695 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0120 12:35:49.757630 593695 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
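Based on the sed expression in the kubectl replace pipeline above, the fragment injected into the CoreDNS Corefile would look roughly like the following (a reconstruction for readability, not copied from the cluster):

    hosts {
       192.168.50.1 host.minikube.internal
       fallthrough
    }
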
I0120 12:35:49.759580 593695 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-912009" to be "Ready" ...
I0120 12:35:50.126202 593695 main.go:141] libmachine: Making call to close driver server
I0120 12:35:50.126240 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .Close
I0120 12:35:50.126243 593695 main.go:141] libmachine: Making call to close driver server
I0120 12:35:50.126267 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .Close
I0120 12:35:50.126553 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | Closing plugin on server side
I0120 12:35:50.126589 593695 main.go:141] libmachine: Successfully made call to close driver server
I0120 12:35:50.126596 593695 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 12:35:50.126602 593695 main.go:141] libmachine: Making call to close driver server
I0120 12:35:50.126608 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .Close
I0120 12:35:50.126719 593695 main.go:141] libmachine: Successfully made call to close driver server
I0120 12:35:50.126731 593695 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 12:35:50.126764 593695 main.go:141] libmachine: (custom-flannel-912009) DBG | Closing plugin on server side
I0120 12:35:50.126851 593695 main.go:141] libmachine: Successfully made call to close driver server
I0120 12:35:50.126869 593695 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 12:35:50.126891 593695 main.go:141] libmachine: Making call to close driver server
I0120 12:35:50.126902 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .Close
I0120 12:35:50.127111 593695 main.go:141] libmachine: Successfully made call to close driver server
I0120 12:35:50.127122 593695 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 12:35:50.137124 593695 main.go:141] libmachine: Making call to close driver server
I0120 12:35:50.137145 593695 main.go:141] libmachine: (custom-flannel-912009) Calling .Close
I0120 12:35:50.137540 593695 main.go:141] libmachine: Successfully made call to close driver server
I0120 12:35:50.137572 593695 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 12:35:50.139205 593695 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0120 12:35:50.140687 593695 addons.go:514] duration metric: took 1.388318596s for enable addons: enabled=[storage-provisioner default-storageclass]
I0120 12:35:50.263249 593695 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-912009" context rescaled to 1 replicas
I0120 12:35:51.764008 593695 node_ready.go:53] node "custom-flannel-912009" has status "Ready":"False"
I0120 12:35:53.764278 593695 node_ready.go:53] node "custom-flannel-912009" has status "Ready":"False"
I0120 12:35:56.267054 593695 node_ready.go:53] node "custom-flannel-912009" has status "Ready":"False"
I0120 12:35:56.762993 593695 node_ready.go:49] node "custom-flannel-912009" has status "Ready":"True"
I0120 12:35:56.763021 593695 node_ready.go:38] duration metric: took 7.003409226s for node "custom-flannel-912009" to be "Ready" ...
I0120 12:35:56.763031 593695 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0120 12:35:56.774021 593695 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-zcgzt" in "kube-system" namespace to be "Ready" ...
I0120 12:35:58.781485 593695 pod_ready.go:103] pod "coredns-668d6bf9bc-zcgzt" in "kube-system" namespace has status "Ready":"False"
I0120 12:36:01.281717 593695 pod_ready.go:103] pod "coredns-668d6bf9bc-zcgzt" in "kube-system" namespace has status "Ready":"False"
I0120 12:36:03.281973 593695 pod_ready.go:103] pod "coredns-668d6bf9bc-zcgzt" in "kube-system" namespace has status "Ready":"False"
I0120 12:36:05.779798 593695 pod_ready.go:103] pod "coredns-668d6bf9bc-zcgzt" in "kube-system" namespace has status "Ready":"False"
I0120 12:36:07.781018 593695 pod_ready.go:103] pod "coredns-668d6bf9bc-zcgzt" in "kube-system" namespace has status "Ready":"False"
I0120 12:36:09.781624 593695 pod_ready.go:103] pod "coredns-668d6bf9bc-zcgzt" in "kube-system" namespace has status "Ready":"False"
I0120 12:36:12.283171 593695 pod_ready.go:93] pod "coredns-668d6bf9bc-zcgzt" in "kube-system" namespace has status "Ready":"True"
I0120 12:36:12.283202 593695 pod_ready.go:82] duration metric: took 15.509154098s for pod "coredns-668d6bf9bc-zcgzt" in "kube-system" namespace to be "Ready" ...
I0120 12:36:12.283215 593695 pod_ready.go:79] waiting up to 15m0s for pod "etcd-custom-flannel-912009" in "kube-system" namespace to be "Ready" ...
I0120 12:36:12.288965 593695 pod_ready.go:93] pod "etcd-custom-flannel-912009" in "kube-system" namespace has status "Ready":"True"
I0120 12:36:12.288990 593695 pod_ready.go:82] duration metric: took 5.767908ms for pod "etcd-custom-flannel-912009" in "kube-system" namespace to be "Ready" ...
I0120 12:36:12.289000 593695 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-custom-flannel-912009" in "kube-system" namespace to be "Ready" ...
I0120 12:36:12.293688 593695 pod_ready.go:93] pod "kube-apiserver-custom-flannel-912009" in "kube-system" namespace has status "Ready":"True"
I0120 12:36:12.293716 593695 pod_ready.go:82] duration metric: took 4.708111ms for pod "kube-apiserver-custom-flannel-912009" in "kube-system" namespace to be "Ready" ...
I0120 12:36:12.293729 593695 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-custom-flannel-912009" in "kube-system" namespace to be "Ready" ...
I0120 12:36:12.297788 593695 pod_ready.go:93] pod "kube-controller-manager-custom-flannel-912009" in "kube-system" namespace has status "Ready":"True"
I0120 12:36:12.297826 593695 pod_ready.go:82] duration metric: took 4.088036ms for pod "kube-controller-manager-custom-flannel-912009" in "kube-system" namespace to be "Ready" ...
I0120 12:36:12.297840 593695 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-v6hzk" in "kube-system" namespace to be "Ready" ...
I0120 12:36:12.301911 593695 pod_ready.go:93] pod "kube-proxy-v6hzk" in "kube-system" namespace has status "Ready":"True"
I0120 12:36:12.301932 593695 pod_ready.go:82] duration metric: took 4.084396ms for pod "kube-proxy-v6hzk" in "kube-system" namespace to be "Ready" ...
I0120 12:36:12.301941 593695 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-custom-flannel-912009" in "kube-system" namespace to be "Ready" ...
I0120 12:36:12.678978 593695 pod_ready.go:93] pod "kube-scheduler-custom-flannel-912009" in "kube-system" namespace has status "Ready":"True"
I0120 12:36:12.679012 593695 pod_ready.go:82] duration metric: took 377.062726ms for pod "kube-scheduler-custom-flannel-912009" in "kube-system" namespace to be "Ready" ...
I0120 12:36:12.679029 593695 pod_ready.go:39] duration metric: took 15.915986454s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
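The node_ready/pod_ready phases above poll the API server until the node and the system-critical pods report a Ready condition. A hedged client-go sketch of the same idea (kubeconfig path, interval and timeout are assumptions; this is not minikube's own wait code):

// readiness_wait.go - poll a node until its Ready condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForNodeReady(ctx context.Context, client kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
		node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // transient API errors: keep polling
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20151-530330/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForNodeReady(context.Background(), client, "custom-flannel-912009", 15*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}
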
I0120 12:36:12.679050 593695 api_server.go:52] waiting for apiserver process to appear ...
I0120 12:36:12.679114 593695 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0120 12:36:12.695820 593695 api_server.go:72] duration metric: took 23.943481333s to wait for apiserver process to appear ...
I0120 12:36:12.695857 593695 api_server.go:88] waiting for apiserver healthz status ...
I0120 12:36:12.695891 593695 api_server.go:253] Checking apiserver healthz at https://192.168.50.190:8443/healthz ...
I0120 12:36:12.700809 593695 api_server.go:279] https://192.168.50.190:8443/healthz returned 200:
ok
I0120 12:36:12.701918 593695 api_server.go:141] control plane version: v1.32.0
I0120 12:36:12.701948 593695 api_server.go:131] duration metric: took 6.082216ms to wait for apiserver health ...
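The healthz phase above amounts to an HTTPS GET against https://192.168.50.190:8443/healthz that succeeds on a 200 "ok" response. A minimal Go sketch of such a probe (skipping TLS verification is an assumption for brevity; a real client would trust the cluster CA instead):

// healthz_probe.go - quick apiserver /healthz check.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustration only: skip verification rather than loading the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.50.190:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, string(body)) // expect: 200 ok
}
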
I0120 12:36:12.701958 593695 system_pods.go:43] waiting for kube-system pods to appear ...
I0120 12:36:12.882081 593695 system_pods.go:59] 7 kube-system pods found
I0120 12:36:12.882124 593695 system_pods.go:61] "coredns-668d6bf9bc-zcgzt" [a4599587-8acf-43f9-a149-178f1cc35aa0] Running
I0120 12:36:12.882133 593695 system_pods.go:61] "etcd-custom-flannel-912009" [6fb49a98-624e-43ed-850a-8a9c63dd40fc] Running
I0120 12:36:12.882140 593695 system_pods.go:61] "kube-apiserver-custom-flannel-912009" [4341c7c9-5d0f-4740-a7af-971594286c38] Running
I0120 12:36:12.882146 593695 system_pods.go:61] "kube-controller-manager-custom-flannel-912009" [0db8a018-592b-4019-a02b-b3565937d695] Running
I0120 12:36:12.882152 593695 system_pods.go:61] "kube-proxy-v6hzk" [e2019ab7-b2fc-48ac-86d2-c014ff8e07c8] Running
I0120 12:36:12.882157 593695 system_pods.go:61] "kube-scheduler-custom-flannel-912009" [f739f365-2d5e-45ee-90d9-6e67ba46401a] Running
I0120 12:36:12.882163 593695 system_pods.go:61] "storage-provisioner" [0f702c35-7c57-44be-aa95-58d0e3c4a56a] Running
I0120 12:36:12.882171 593695 system_pods.go:74] duration metric: took 180.205562ms to wait for pod list to return data ...
I0120 12:36:12.882184 593695 default_sa.go:34] waiting for default service account to be created ...
I0120 12:36:13.078402 593695 default_sa.go:45] found service account: "default"
I0120 12:36:13.078437 593695 default_sa.go:55] duration metric: took 196.244937ms for default service account to be created ...
I0120 12:36:13.078449 593695 system_pods.go:137] waiting for k8s-apps to be running ...
I0120 12:36:13.281225 593695 system_pods.go:87] 7 kube-system pods found
I0120 12:36:13.479438 593695 system_pods.go:105] "coredns-668d6bf9bc-zcgzt" [a4599587-8acf-43f9-a149-178f1cc35aa0] Running
I0120 12:36:13.479469 593695 system_pods.go:105] "etcd-custom-flannel-912009" [6fb49a98-624e-43ed-850a-8a9c63dd40fc] Running
I0120 12:36:13.479478 593695 system_pods.go:105] "kube-apiserver-custom-flannel-912009" [4341c7c9-5d0f-4740-a7af-971594286c38] Running
I0120 12:36:13.479485 593695 system_pods.go:105] "kube-controller-manager-custom-flannel-912009" [0db8a018-592b-4019-a02b-b3565937d695] Running
I0120 12:36:13.479491 593695 system_pods.go:105] "kube-proxy-v6hzk" [e2019ab7-b2fc-48ac-86d2-c014ff8e07c8] Running
I0120 12:36:13.479496 593695 system_pods.go:105] "kube-scheduler-custom-flannel-912009" [f739f365-2d5e-45ee-90d9-6e67ba46401a] Running
I0120 12:36:13.479501 593695 system_pods.go:105] "storage-provisioner" [0f702c35-7c57-44be-aa95-58d0e3c4a56a] Running
I0120 12:36:13.479511 593695 system_pods.go:147] duration metric: took 401.053197ms to wait for k8s-apps to be running ...
I0120 12:36:13.479520 593695 system_svc.go:44] waiting for kubelet service to be running ....
I0120 12:36:13.479592 593695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0120 12:36:13.495091 593695 system_svc.go:56] duration metric: took 15.558739ms WaitForService to wait for kubelet
I0120 12:36:13.495133 593695 kubeadm.go:582] duration metric: took 24.742796954s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0120 12:36:13.495185 593695 node_conditions.go:102] verifying NodePressure condition ...
I0120 12:36:13.679355 593695 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0120 12:36:13.679383 593695 node_conditions.go:123] node cpu capacity is 2
I0120 12:36:13.679395 593695 node_conditions.go:105] duration metric: took 184.200741ms to run NodePressure ...
I0120 12:36:13.679407 593695 start.go:241] waiting for startup goroutines ...
I0120 12:36:13.679413 593695 start.go:246] waiting for cluster config update ...
I0120 12:36:13.679423 593695 start.go:255] writing updated cluster config ...
I0120 12:36:13.679733 593695 ssh_runner.go:195] Run: rm -f paused
I0120 12:36:13.731412 593695 start.go:600] kubectl: 1.32.1, cluster: 1.32.0 (minor skew: 0)
I0120 12:36:13.733373 593695 out.go:177] * Done! kubectl is now configured to use "custom-flannel-912009" cluster and "default" namespace by default
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
5cf4af7a2d8ca 523cad1a4df73 4 minutes ago Exited dashboard-metrics-scraper 8 06c61c21e245f dashboard-metrics-scraper-86c6bf9756-vsd89
17e498bec13d8 07655ddf2eebe 20 minutes ago Running kubernetes-dashboard 0 c2dc07b18735a kubernetes-dashboard-7779f9b69b-tcsgt
81d92b6a56c07 6e38f40d628db 20 minutes ago Running storage-provisioner 0 21c007d43c3b5 storage-provisioner
76a885717143a c69fa2e9cbf5f 20 minutes ago Running coredns 0 60f0b0896a631 coredns-668d6bf9bc-9xmv8
e4e354bee1c02 c69fa2e9cbf5f 20 minutes ago Running coredns 0 0ae9eb49fb8bd coredns-668d6bf9bc-wsnqr
bb046d57f0b60 040f9f8aac8cd 20 minutes ago Running kube-proxy 0 108e5a42c5c32 kube-proxy-7mw9s
e79e55fb70131 a389e107f4ff1 20 minutes ago Running kube-scheduler 2 f2d16d62a70b6 kube-scheduler-no-preload-677886
57f630813f13f a9e7e6b294baf 20 minutes ago Running etcd 2 33d206163798c etcd-no-preload-677886
f7985b0045eb2 c2e17b8d0f4a3 20 minutes ago Running kube-apiserver 2 2422f768df827 kube-apiserver-no-preload-677886
857b30c51caac 8cab3d2a8bd0f 20 minutes ago Running kube-controller-manager 2 3907591c5b2d9 kube-controller-manager-no-preload-677886
==> containerd <==
Jan 20 12:40:54 no-preload-677886 containerd[557]: time="2025-01-20T12:40:54.674656527Z" level=info msg="CreateContainer within sandbox \"06c61c21e245f21d22c7241510c19d05fc20da1c4b46effe147cd0c8adf1a148\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:7,} returns container id \"9aff5a1a01cc5e0f456cd075e02f2ee4a9760f6b840f90d370636b0f44b7e6af\""
Jan 20 12:40:54 no-preload-677886 containerd[557]: time="2025-01-20T12:40:54.675781444Z" level=info msg="StartContainer for \"9aff5a1a01cc5e0f456cd075e02f2ee4a9760f6b840f90d370636b0f44b7e6af\""
Jan 20 12:40:54 no-preload-677886 containerd[557]: time="2025-01-20T12:40:54.753746186Z" level=info msg="StartContainer for \"9aff5a1a01cc5e0f456cd075e02f2ee4a9760f6b840f90d370636b0f44b7e6af\" returns successfully"
Jan 20 12:40:54 no-preload-677886 containerd[557]: time="2025-01-20T12:40:54.793602129Z" level=info msg="shim disconnected" id=9aff5a1a01cc5e0f456cd075e02f2ee4a9760f6b840f90d370636b0f44b7e6af namespace=k8s.io
Jan 20 12:40:54 no-preload-677886 containerd[557]: time="2025-01-20T12:40:54.793707460Z" level=warning msg="cleaning up after shim disconnected" id=9aff5a1a01cc5e0f456cd075e02f2ee4a9760f6b840f90d370636b0f44b7e6af namespace=k8s.io
Jan 20 12:40:54 no-preload-677886 containerd[557]: time="2025-01-20T12:40:54.793743803Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 20 12:40:54 no-preload-677886 containerd[557]: time="2025-01-20T12:40:54.919941071Z" level=info msg="RemoveContainer for \"5fc207c30fb37cc7662422bb462355a0b2a3325ea14f35acaecb5a3258661ebe\""
Jan 20 12:40:54 no-preload-677886 containerd[557]: time="2025-01-20T12:40:54.935054987Z" level=info msg="RemoveContainer for \"5fc207c30fb37cc7662422bb462355a0b2a3325ea14f35acaecb5a3258661ebe\" returns successfully"
Jan 20 12:40:57 no-preload-677886 containerd[557]: time="2025-01-20T12:40:57.653076809Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 20 12:40:57 no-preload-677886 containerd[557]: time="2025-01-20T12:40:57.675325437Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
Jan 20 12:40:57 no-preload-677886 containerd[557]: time="2025-01-20T12:40:57.677659249Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
Jan 20 12:40:57 no-preload-677886 containerd[557]: time="2025-01-20T12:40:57.677705767Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
Jan 20 12:46:01 no-preload-677886 containerd[557]: time="2025-01-20T12:46:01.655076259Z" level=info msg="CreateContainer within sandbox \"06c61c21e245f21d22c7241510c19d05fc20da1c4b46effe147cd0c8adf1a148\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,}"
Jan 20 12:46:01 no-preload-677886 containerd[557]: time="2025-01-20T12:46:01.680430537Z" level=info msg="CreateContainer within sandbox \"06c61c21e245f21d22c7241510c19d05fc20da1c4b46effe147cd0c8adf1a148\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,} returns container id \"5cf4af7a2d8ca846f4e9e80a7426685e46f57b5564f052cbce0d35c4d6b215a9\""
Jan 20 12:46:01 no-preload-677886 containerd[557]: time="2025-01-20T12:46:01.681879982Z" level=info msg="StartContainer for \"5cf4af7a2d8ca846f4e9e80a7426685e46f57b5564f052cbce0d35c4d6b215a9\""
Jan 20 12:46:01 no-preload-677886 containerd[557]: time="2025-01-20T12:46:01.777313112Z" level=info msg="StartContainer for \"5cf4af7a2d8ca846f4e9e80a7426685e46f57b5564f052cbce0d35c4d6b215a9\" returns successfully"
Jan 20 12:46:01 no-preload-677886 containerd[557]: time="2025-01-20T12:46:01.838372711Z" level=info msg="shim disconnected" id=5cf4af7a2d8ca846f4e9e80a7426685e46f57b5564f052cbce0d35c4d6b215a9 namespace=k8s.io
Jan 20 12:46:01 no-preload-677886 containerd[557]: time="2025-01-20T12:46:01.838693055Z" level=warning msg="cleaning up after shim disconnected" id=5cf4af7a2d8ca846f4e9e80a7426685e46f57b5564f052cbce0d35c4d6b215a9 namespace=k8s.io
Jan 20 12:46:01 no-preload-677886 containerd[557]: time="2025-01-20T12:46:01.838842383Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 20 12:46:02 no-preload-677886 containerd[557]: time="2025-01-20T12:46:02.674029185Z" level=info msg="RemoveContainer for \"9aff5a1a01cc5e0f456cd075e02f2ee4a9760f6b840f90d370636b0f44b7e6af\""
Jan 20 12:46:02 no-preload-677886 containerd[557]: time="2025-01-20T12:46:02.686018126Z" level=info msg="RemoveContainer for \"9aff5a1a01cc5e0f456cd075e02f2ee4a9760f6b840f90d370636b0f44b7e6af\" returns successfully"
Jan 20 12:46:07 no-preload-677886 containerd[557]: time="2025-01-20T12:46:07.652806490Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 20 12:46:07 no-preload-677886 containerd[557]: time="2025-01-20T12:46:07.672930998Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
Jan 20 12:46:07 no-preload-677886 containerd[557]: time="2025-01-20T12:46:07.675268782Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
Jan 20 12:46:07 no-preload-677886 containerd[557]: time="2025-01-20T12:46:07.675358695Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
==> coredns [76a885717143af6da5b22aad50e2f6b5cc735ca978b03ead96d09b801a042ff8] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
CoreDNS-1.11.3
linux/amd64, go1.21.11, a6338e9
==> coredns [e4e354bee1c02e245f4ee1aa584e4f9c33452a74cb3e59b6d4e1c4a23dbe13af] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
CoreDNS-1.11.3
linux/amd64, go1.21.11, a6338e9
==> describe nodes <==
Name: no-preload-677886
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=no-preload-677886
kubernetes.io/os=linux
minikube.k8s.io/commit=77d80cf1517f5f1439721b28711982314b21bec9
minikube.k8s.io/name=no-preload-677886
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_01_20T12_29_52_0700
minikube.k8s.io/version=v1.35.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 20 Jan 2025 12:29:48 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: no-preload-677886
AcquireTime: <unset>
RenewTime: Mon, 20 Jan 2025 12:50:27 +0000
Conditions:
Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                      Message
----             ------  -----------------                 ------------------                ------                      -------
MemoryPressure   False   Mon, 20 Jan 2025 12:48:32 +0000   Mon, 20 Jan 2025 12:29:47 +0000   KubeletHasSufficientMemory  kubelet has sufficient memory available
DiskPressure     False   Mon, 20 Jan 2025 12:48:32 +0000   Mon, 20 Jan 2025 12:29:47 +0000   KubeletHasNoDiskPressure    kubelet has no disk pressure
PIDPressure      False   Mon, 20 Jan 2025 12:48:32 +0000   Mon, 20 Jan 2025 12:29:47 +0000   KubeletHasSufficientPID     kubelet has sufficient PID available
Ready            True    Mon, 20 Jan 2025 12:48:32 +0000   Mon, 20 Jan 2025 12:29:49 +0000   KubeletReady                kubelet is posting ready status
Addresses:
InternalIP: 192.168.72.157
Hostname: no-preload-677886
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 2164184Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 2164184Ki
pods: 110
System Info:
Machine ID: 9a347a9ff01e4e74b3ae9e6ad1ac1fad
System UUID: 9a347a9f-f01e-4e74-b3ae-9e6ad1ac1fad
Boot ID: 635a9d1b-a517-4374-bca0-3a9cf43ae5f1
Kernel Version: 5.10.207
OS Image: Buildroot 2023.02.9
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.7.23
Kubelet Version: v1.32.0
Kube-Proxy Version: v1.32.0
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (11 in total)
Namespace             Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
---------             ----                                        ------------  ----------  ---------------  -------------  ---
kube-system           coredns-668d6bf9bc-9xmv8                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
kube-system           coredns-668d6bf9bc-wsnqr                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
kube-system           etcd-no-preload-677886                      100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         20m
kube-system           kube-apiserver-no-preload-677886            250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
kube-system           kube-controller-manager-no-preload-677886   200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
kube-system           kube-proxy-7mw9s                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
kube-system           kube-scheduler-no-preload-677886            100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
kube-system           metrics-server-f79f97bbb-4c528              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
kube-system           storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
kubernetes-dashboard  dashboard-metrics-scraper-86c6bf9756-vsd89  0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
kubernetes-dashboard  kubernetes-dashboard-7779f9b69b-tcsgt       0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource           Requests     Limits
--------           --------     ------
cpu                950m (47%)   0 (0%)
memory             440Mi (20%)  340Mi (16%)
ephemeral-storage  0 (0%)       0 (0%)
hugepages-2Mi      0 (0%)       0 (0%)
Events:
Type    Reason                   Age  From             Message
----    ------                   ---- ----             -------
Normal  Starting                 20m  kube-proxy
Normal  Starting                 20m  kubelet          Starting kubelet.
Normal  NodeAllocatableEnforced  20m  kubelet          Updated Node Allocatable limit across pods
Normal  NodeHasSufficientMemory  20m  kubelet          Node no-preload-677886 status is now: NodeHasSufficientMemory
Normal  NodeHasNoDiskPressure    20m  kubelet          Node no-preload-677886 status is now: NodeHasNoDiskPressure
Normal  NodeHasSufficientPID     20m  kubelet          Node no-preload-677886 status is now: NodeHasSufficientPID
Normal  RegisteredNode           20m  node-controller  Node no-preload-677886 event: Registered Node no-preload-677886 in Controller
==> dmesg <==
[ +0.054857] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
[ +0.042833] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +5.122155] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +2.701509] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
[ +1.682325] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
[ +6.549216] systemd-fstab-generator[480]: Ignoring "noauto" option for root device
[ +0.083447] kauditd_printk_skb: 1 callbacks suppressed
[ +0.073960] systemd-fstab-generator[492]: Ignoring "noauto" option for root device
[ +0.219134] systemd-fstab-generator[506]: Ignoring "noauto" option for root device
[ +0.130041] systemd-fstab-generator[518]: Ignoring "noauto" option for root device
[ +0.340041] systemd-fstab-generator[549]: Ignoring "noauto" option for root device
[ +1.074118] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
[ +2.160341] systemd-fstab-generator[750]: Ignoring "noauto" option for root device
[ +1.103771] kauditd_printk_skb: 245 callbacks suppressed
[ +5.165738] kauditd_printk_skb: 30 callbacks suppressed
[ +12.441907] kauditd_printk_skb: 72 callbacks suppressed
[Jan20 12:29] systemd-fstab-generator[3022]: Ignoring "noauto" option for root device
[ +6.599149] systemd-fstab-generator[3390]: Ignoring "noauto" option for root device
[ +0.100826] kauditd_printk_skb: 87 callbacks suppressed
[ +4.461564] systemd-fstab-generator[3487]: Ignoring "noauto" option for root device
[ +1.096108] kauditd_printk_skb: 34 callbacks suppressed
[Jan20 12:30] kauditd_printk_skb: 90 callbacks suppressed
[ +6.002434] kauditd_printk_skb: 4 callbacks suppressed
==> etcd [57f630813f13f00958007f01fffdfbb131e4c40d6c4ca9d26a38b27dc1bb5ed5] <==
{"level":"info","ts":"2025-01-20T12:35:04.530740Z","caller":"traceutil/trace.go:171","msg":"trace[502739809] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:849; }","duration":"203.743885ms","start":"2025-01-20T12:35:04.326977Z","end":"2025-01-20T12:35:04.530721Z","steps":["trace[502739809] 'range keys from in-memory index tree' (duration: 202.507505ms)"],"step_count":1}
{"level":"info","ts":"2025-01-20T12:35:08.076076Z","caller":"traceutil/trace.go:171","msg":"trace[1805590902] transaction","detail":"{read_only:false; response_revision:854; number_of_response:1; }","duration":"215.268338ms","start":"2025-01-20T12:35:07.860786Z","end":"2025-01-20T12:35:08.076054Z","steps":["trace[1805590902] 'process raft request' (duration: 214.867161ms)"],"step_count":1}
{"level":"warn","ts":"2025-01-20T12:35:25.875692Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.260599ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1814550993203984827 > lease_revoke:<id:192e9483b0c17922>","response":"size:28"}
{"level":"info","ts":"2025-01-20T12:35:25.876544Z","caller":"traceutil/trace.go:171","msg":"trace[1822305913] linearizableReadLoop","detail":"{readStateIndex:950; appliedIndex:948; }","duration":"101.185741ms","start":"2025-01-20T12:35:25.775343Z","end":"2025-01-20T12:35:25.876529Z","steps":["trace[1822305913] 'read index received' (duration: 90.61518ms)","trace[1822305913] 'applied index is now lower than readState.Index' (duration: 10.569489ms)"],"step_count":2}
{"level":"info","ts":"2025-01-20T12:35:25.877129Z","caller":"traceutil/trace.go:171","msg":"trace[1043700226] transaction","detail":"{read_only:false; response_revision:869; number_of_response:1; }","duration":"121.704233ms","start":"2025-01-20T12:35:25.755406Z","end":"2025-01-20T12:35:25.877110Z","steps":["trace[1043700226] 'process raft request' (duration: 120.902471ms)"],"step_count":1}
{"level":"warn","ts":"2025-01-20T12:35:25.877879Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.511357ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-01-20T12:35:25.877947Z","caller":"traceutil/trace.go:171","msg":"trace[801620436] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:869; }","duration":"102.619411ms","start":"2025-01-20T12:35:25.775317Z","end":"2025-01-20T12:35:25.877936Z","steps":["trace[801620436] 'agreement among raft nodes before linearized reading' (duration: 101.291563ms)"],"step_count":1}
{"level":"warn","ts":"2025-01-20T12:35:26.158550Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.117677ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-01-20T12:35:26.159042Z","caller":"traceutil/trace.go:171","msg":"trace[376424248] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:869; }","duration":"182.646818ms","start":"2025-01-20T12:35:25.976322Z","end":"2025-01-20T12:35:26.158968Z","steps":["trace[376424248] 'range keys from in-memory index tree' (duration: 181.932153ms)"],"step_count":1}
{"level":"info","ts":"2025-01-20T12:35:32.698796Z","caller":"traceutil/trace.go:171","msg":"trace[1499265984] linearizableReadLoop","detail":"{readStateIndex:957; appliedIndex:956; }","duration":"123.768592ms","start":"2025-01-20T12:35:32.575005Z","end":"2025-01-20T12:35:32.698774Z","steps":["trace[1499265984] 'read index received' (duration: 123.580369ms)","trace[1499265984] 'applied index is now lower than readState.Index' (duration: 187.305µs)"],"step_count":2}
{"level":"warn","ts":"2025-01-20T12:35:32.699009Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.976115ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-01-20T12:35:32.699039Z","caller":"traceutil/trace.go:171","msg":"trace[661945791] transaction","detail":"{read_only:false; response_revision:875; number_of_response:1; }","duration":"297.790763ms","start":"2025-01-20T12:35:32.401229Z","end":"2025-01-20T12:35:32.699020Z","steps":["trace[661945791] 'process raft request' (duration: 297.376729ms)"],"step_count":1}
{"level":"info","ts":"2025-01-20T12:35:32.699050Z","caller":"traceutil/trace.go:171","msg":"trace[483635226] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:875; }","duration":"124.059903ms","start":"2025-01-20T12:35:32.574980Z","end":"2025-01-20T12:35:32.699039Z","steps":["trace[483635226] 'agreement among raft nodes before linearized reading' (duration: 123.958256ms)"],"step_count":1}
{"level":"warn","ts":"2025-01-20T12:35:33.133839Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"358.192543ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-01-20T12:35:33.133967Z","caller":"traceutil/trace.go:171","msg":"trace[1140872822] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:875; }","duration":"358.372372ms","start":"2025-01-20T12:35:32.775575Z","end":"2025-01-20T12:35:33.133948Z","steps":["trace[1140872822] 'range keys from in-memory index tree' (duration: 358.120438ms)"],"step_count":1}
{"level":"warn","ts":"2025-01-20T12:35:33.134031Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-20T12:35:32.775560Z","time spent":"358.444333ms","remote":"127.0.0.1:56836","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
{"level":"info","ts":"2025-01-20T12:39:47.396602Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":834}
{"level":"info","ts":"2025-01-20T12:39:47.440197Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":834,"took":"42.630709ms","hash":1825579988,"current-db-size-bytes":3035136,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":3035136,"current-db-size-in-use":"3.0 MB"}
{"level":"info","ts":"2025-01-20T12:39:47.440367Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":1825579988,"revision":834,"compact-revision":-1}
{"level":"info","ts":"2025-01-20T12:44:47.405763Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1085}
{"level":"info","ts":"2025-01-20T12:44:47.410863Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1085,"took":"4.397719ms","hash":530143079,"current-db-size-bytes":3035136,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":1757184,"current-db-size-in-use":"1.8 MB"}
{"level":"info","ts":"2025-01-20T12:44:47.411052Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":530143079,"revision":1085,"compact-revision":834}
{"level":"info","ts":"2025-01-20T12:49:47.415315Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1337}
{"level":"info","ts":"2025-01-20T12:49:47.420368Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1337,"took":"4.333345ms","hash":3473013662,"current-db-size-bytes":3035136,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":1802240,"current-db-size-in-use":"1.8 MB"}
{"level":"info","ts":"2025-01-20T12:49:47.420478Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3473013662,"revision":1337,"compact-revision":1085}
==> kernel <==
12:50:36 up 25 min, 0 users, load average: 0.27, 0.33, 0.34
Linux no-preload-677886 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2023.02.9"
==> kube-apiserver [f7985b0045eb2e8f6137597fe295b4f16ddea6cf369752b86b0769aa64dbcf2d] <==
I0120 12:45:49.943515 1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0120 12:45:49.943580 1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
W0120 12:47:49.944601 1 handler_proxy.go:99] no RequestInfo found in the context
E0120 12:47:49.944720 1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
W0120 12:47:49.944807 1 handler_proxy.go:99] no RequestInfo found in the context
E0120 12:47:49.944898 1 controller.go:102] "Unhandled Error" err=<
loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
I0120 12:47:49.946001 1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0120 12:47:49.946055 1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
W0120 12:49:48.940420 1 handler_proxy.go:99] no RequestInfo found in the context
E0120 12:49:48.940702 1 controller.go:146] "Unhandled Error" err=<
Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
W0120 12:49:49.942206 1 handler_proxy.go:99] no RequestInfo found in the context
E0120 12:49:49.942280 1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
W0120 12:49:49.942356 1 handler_proxy.go:99] no RequestInfo found in the context
E0120 12:49:49.942431 1 controller.go:102] "Unhandled Error" err=<
loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
I0120 12:49:49.943456 1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0120 12:49:49.943539 1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
==> kube-controller-manager [857b30c51caaca20624f74f6273daea5d9f5faa387927e88cb41e57658c008fb] <==
E0120 12:45:55.728959 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0120 12:45:55.818653 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
I0120 12:46:02.694585 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="287.14µs"
I0120 12:46:08.961259 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="159.436µs"
I0120 12:46:20.667084 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="116.56µs"
E0120 12:46:25.736017 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0120 12:46:25.826593 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
I0120 12:46:33.669925 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="99.808µs"
E0120 12:46:55.743665 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0120 12:46:55.835354 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0120 12:47:25.751518 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0120 12:47:25.846895 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0120 12:47:55.758921 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0120 12:47:55.865629 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0120 12:48:25.765115 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0120 12:48:25.874999 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
I0120 12:48:32.756416 1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="no-preload-677886"
E0120 12:48:55.771528 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0120 12:48:55.883592 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0120 12:49:25.778842 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0120 12:49:25.893444 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0120 12:49:55.786648 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0120 12:49:55.903027 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0120 12:50:25.794786 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0120 12:50:25.913505 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
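The repeating resource-quota and garbage-collector errors above all trace back to the same condition: the v1beta1.metrics.k8s.io APIService stays registered while its backing metrics-server pod can never pull its image, so aggregated discovery for that group remains stale. A hedged client-go sketch (the kubeconfig path is a placeholder, not taken from this run) of how that discovery state could be inspected from outside the cluster:

```go
// Illustrative sketch: check what the aggregator advertises for metrics.k8s.io.
// While metrics-server is unavailable, the group is still listed but its
// resources cannot be served, which controller-manager reports as a stale
// GroupVersion.
package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; substitute the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	groups, err := dc.ServerGroups()
	if err != nil {
		fmt.Println("discovery error:", err)
		return
	}
	for _, g := range groups.Groups {
		if g.Name == "metrics.k8s.io" {
			fmt.Println("metrics.k8s.io advertised versions:", g.Versions)
		}
	}
}
```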
==> kube-proxy [bb046d57f0b60ac605653c0ad3f1d1884f34f7c2e35bbc278da86697c901a81a] <==
add table ip kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^
>
E0120 12:29:57.724249 1 proxier.go:733] "Error cleaning up nftables rules" err=<
could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
add table ip6 kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^^
>
I0120 12:29:57.796270 1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.72.157"]
E0120 12:29:57.796346 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0120 12:29:58.259194 1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
I0120 12:29:58.259420 1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0120 12:29:58.259548 1 server_linux.go:170] "Using iptables Proxier"
I0120 12:29:58.282692 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0120 12:29:58.282963 1 server.go:497] "Version info" version="v1.32.0"
I0120 12:29:58.282977 1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0120 12:29:58.317220 1 config.go:199] "Starting service config controller"
I0120 12:29:58.317250 1 shared_informer.go:313] Waiting for caches to sync for service config
I0120 12:29:58.317276 1 config.go:105] "Starting endpoint slice config controller"
I0120 12:29:58.317280 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0120 12:29:58.326715 1 config.go:329] "Starting node config controller"
I0120 12:29:58.326729 1 shared_informer.go:313] Waiting for caches to sync for node config
I0120 12:29:58.465517 1 shared_informer.go:320] Caches are synced for node config
I0120 12:29:58.465588 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0120 12:29:58.465602 1 shared_informer.go:320] Caches are synced for service config
==> kube-scheduler [e79e55fb70131a8b68edddf89a87c3809690ef5705041693b06a7f7f621f088f] <==
W0120 12:29:48.973835 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0120 12:29:48.973909 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0120 12:29:48.974226 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0120 12:29:48.974280 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0120 12:29:49.819868 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
E0120 12:29:49.820361 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0120 12:29:49.858472 1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0120 12:29:49.859405 1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
W0120 12:29:49.865195 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0120 12:29:49.865248 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0120 12:29:49.979497 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0120 12:29:49.979921 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
W0120 12:29:50.055332 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0120 12:29:50.055387 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0120 12:29:50.059664 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0120 12:29:50.060098 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0120 12:29:50.144969 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0120 12:29:50.145023 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0120 12:29:50.203965 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0120 12:29:50.204062 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0120 12:29:50.214364 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0120 12:29:50.214420 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0120 12:29:50.230114 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0120 12:29:50.230220 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
I0120 12:29:52.060525 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Jan 20 12:49:12 no-preload-677886 kubelet[3397]: I0120 12:49:12.648949 3397 scope.go:117] "RemoveContainer" containerID="5cf4af7a2d8ca846f4e9e80a7426685e46f57b5564f052cbce0d35c4d6b215a9"
Jan 20 12:49:12 no-preload-677886 kubelet[3397]: E0120 12:49:12.649518 3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-vsd89_kubernetes-dashboard(e8601724-da1c-4ada-9794-a7a65336042a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-vsd89" podUID="e8601724-da1c-4ada-9794-a7a65336042a"
Jan 20 12:49:20 no-preload-677886 kubelet[3397]: E0120 12:49:20.650671 3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-4c528" podUID="c970f3ba-5f5c-4cc5-8a4e-99fb56ba8778"
Jan 20 12:49:27 no-preload-677886 kubelet[3397]: I0120 12:49:27.649276 3397 scope.go:117] "RemoveContainer" containerID="5cf4af7a2d8ca846f4e9e80a7426685e46f57b5564f052cbce0d35c4d6b215a9"
Jan 20 12:49:27 no-preload-677886 kubelet[3397]: E0120 12:49:27.650479 3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-vsd89_kubernetes-dashboard(e8601724-da1c-4ada-9794-a7a65336042a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-vsd89" podUID="e8601724-da1c-4ada-9794-a7a65336042a"
Jan 20 12:49:32 no-preload-677886 kubelet[3397]: E0120 12:49:32.649214 3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-4c528" podUID="c970f3ba-5f5c-4cc5-8a4e-99fb56ba8778"
Jan 20 12:49:39 no-preload-677886 kubelet[3397]: I0120 12:49:39.650725 3397 scope.go:117] "RemoveContainer" containerID="5cf4af7a2d8ca846f4e9e80a7426685e46f57b5564f052cbce0d35c4d6b215a9"
Jan 20 12:49:39 no-preload-677886 kubelet[3397]: E0120 12:49:39.652075 3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-vsd89_kubernetes-dashboard(e8601724-da1c-4ada-9794-a7a65336042a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-vsd89" podUID="e8601724-da1c-4ada-9794-a7a65336042a"
Jan 20 12:49:47 no-preload-677886 kubelet[3397]: E0120 12:49:47.650059 3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-4c528" podUID="c970f3ba-5f5c-4cc5-8a4e-99fb56ba8778"
Jan 20 12:49:51 no-preload-677886 kubelet[3397]: E0120 12:49:51.672862 3397 iptables.go:577] "Could not set up iptables canary" err=<
Jan 20 12:49:51 no-preload-677886 kubelet[3397]: error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
Jan 20 12:49:51 no-preload-677886 kubelet[3397]: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Jan 20 12:49:51 no-preload-677886 kubelet[3397]: Perhaps ip6tables or your kernel needs to be upgraded.
Jan 20 12:49:51 no-preload-677886 kubelet[3397]: > table="nat" chain="KUBE-KUBELET-CANARY"
Jan 20 12:49:54 no-preload-677886 kubelet[3397]: I0120 12:49:54.648770 3397 scope.go:117] "RemoveContainer" containerID="5cf4af7a2d8ca846f4e9e80a7426685e46f57b5564f052cbce0d35c4d6b215a9"
Jan 20 12:49:54 no-preload-677886 kubelet[3397]: E0120 12:49:54.648999 3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-vsd89_kubernetes-dashboard(e8601724-da1c-4ada-9794-a7a65336042a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-vsd89" podUID="e8601724-da1c-4ada-9794-a7a65336042a"
Jan 20 12:50:02 no-preload-677886 kubelet[3397]: E0120 12:50:02.650557 3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-4c528" podUID="c970f3ba-5f5c-4cc5-8a4e-99fb56ba8778"
Jan 20 12:50:09 no-preload-677886 kubelet[3397]: I0120 12:50:09.649430 3397 scope.go:117] "RemoveContainer" containerID="5cf4af7a2d8ca846f4e9e80a7426685e46f57b5564f052cbce0d35c4d6b215a9"
Jan 20 12:50:09 no-preload-677886 kubelet[3397]: E0120 12:50:09.650007 3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-vsd89_kubernetes-dashboard(e8601724-da1c-4ada-9794-a7a65336042a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-vsd89" podUID="e8601724-da1c-4ada-9794-a7a65336042a"
Jan 20 12:50:16 no-preload-677886 kubelet[3397]: E0120 12:50:16.650249 3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-4c528" podUID="c970f3ba-5f5c-4cc5-8a4e-99fb56ba8778"
Jan 20 12:50:20 no-preload-677886 kubelet[3397]: I0120 12:50:20.649575 3397 scope.go:117] "RemoveContainer" containerID="5cf4af7a2d8ca846f4e9e80a7426685e46f57b5564f052cbce0d35c4d6b215a9"
Jan 20 12:50:20 no-preload-677886 kubelet[3397]: E0120 12:50:20.650305 3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-vsd89_kubernetes-dashboard(e8601724-da1c-4ada-9794-a7a65336042a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-vsd89" podUID="e8601724-da1c-4ada-9794-a7a65336042a"
Jan 20 12:50:31 no-preload-677886 kubelet[3397]: E0120 12:50:31.650692 3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-4c528" podUID="c970f3ba-5f5c-4cc5-8a4e-99fb56ba8778"
Jan 20 12:50:32 no-preload-677886 kubelet[3397]: I0120 12:50:32.648897 3397 scope.go:117] "RemoveContainer" containerID="5cf4af7a2d8ca846f4e9e80a7426685e46f57b5564f052cbce0d35c4d6b215a9"
Jan 20 12:50:32 no-preload-677886 kubelet[3397]: E0120 12:50:32.649102 3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-vsd89_kubernetes-dashboard(e8601724-da1c-4ada-9794-a7a65336042a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-vsd89" podUID="e8601724-da1c-4ada-9794-a7a65336042a"
==> kubernetes-dashboard [17e498bec13d87a58929ba35ccaf56c4211c87612834d20a30470458bc856e1a] <==
2025/01/20 12:38:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 12:38:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 12:39:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 12:39:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 12:40:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 12:40:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 12:41:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 12:41:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 12:42:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 12:42:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 12:43:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 12:43:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 12:44:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 12:44:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 12:45:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 12:45:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 12:46:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 12:46:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 12:47:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 12:47:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 12:48:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 12:48:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 12:49:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 12:49:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 12:50:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
==> storage-provisioner [81d92b6a56c0744c4c3cc5e4db96cf8e4ecb0ce6ad938ce745291373662aaa95] <==
I0120 12:29:59.089360 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0120 12:29:59.115241 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0120 12:29:59.115351 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0120 12:29:59.156780 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0120 12:29:59.159991 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bde922e1-103e-4ced-9936-c8f670e9c9a5", APIVersion:"v1", ResourceVersion:"417", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-677886_c9d0d7a0-f72f-4cf4-89f8-d0760e9dcde2 became leader
I0120 12:29:59.160093 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-677886_c9d0d7a0-f72f-4cf4-89f8-d0760e9dcde2!
I0120 12:29:59.267535 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-677886_c9d0d7a0-f72f-4cf4-89f8-d0760e9dcde2!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-677886 -n no-preload-677886
helpers_test.go:261: (dbg) Run: kubectl --context no-preload-677886 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-4c528
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context no-preload-677886 describe pod metrics-server-f79f97bbb-4c528
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-677886 describe pod metrics-server-f79f97bbb-4c528: exit status 1 (63.810161ms)
** stderr **
Error from server (NotFound): pods "metrics-server-f79f97bbb-4c528" not found
** /stderr **
helpers_test.go:279: kubectl --context no-preload-677886 describe pod metrics-server-f79f97bbb-4c528: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (1540.68s)
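For reference, the post-mortem query the harness runs through kubectl (`--field-selector=status.phase!=Running`, all namespaces) has a straightforward client-go equivalent. The sketch below is illustrative only; the kubeconfig path is a placeholder, not part of helpers_test.go:

```go
// Illustrative sketch: list non-running pods across all namespaces, mirroring
// the kubectl field selector used in the post-mortem step above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; substitute the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Empty namespace means all namespaces (kubectl's -A); the field selector
	// filters out pods whose phase is Running.
	pods, err := cs.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}
```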