=== RUN TestAddons/Setup
addons_test.go:108: (dbg) Run: out/minikube-linux-amd64 start -p addons-486748 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p addons-486748 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: exit status 80 (8m37.13871256s)
-- stdout --
* [addons-486748] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
- MINIKUBE_LOCATION=21683
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "addons-486748" primary control-plane node in "addons-486748" cluster
* Pulling base image v0.0.48-1759382731-21643 ...
* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
-- /stdout --
** stderr **
I1002 19:46:49.097080 14172 out.go:360] Setting OutFile to fd 1 ...
I1002 19:46:49.097331 14172 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 19:46:49.097342 14172 out.go:374] Setting ErrFile to fd 2...
I1002 19:46:49.097347 14172 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 19:46:49.097531 14172 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
I1002 19:46:49.098069 14172 out.go:368] Setting JSON to false
I1002 19:46:49.098897 14172 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1758,"bootTime":1759432651,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1002 19:46:49.098983 14172 start.go:140] virtualization: kvm guest
I1002 19:46:49.100823 14172 out.go:179] * [addons-486748] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1002 19:46:49.102124 14172 out.go:179] - MINIKUBE_LOCATION=21683
I1002 19:46:49.102192 14172 notify.go:221] Checking for updates...
I1002 19:46:49.104547 14172 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1002 19:46:49.105783 14172 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
I1002 19:46:49.106797 14172 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
I1002 19:46:49.107825 14172 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1002 19:46:49.108854 14172 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1002 19:46:49.110054 14172 driver.go:422] Setting default libvirt URI to qemu:///system
I1002 19:46:49.133310 14172 docker.go:124] docker version: linux-28.4.0:Docker Engine - Community
I1002 19:46:49.133424 14172 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1002 19:46:49.185386 14172 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-02 19:46:49.175608895 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1002 19:46:49.185522 14172 docker.go:319] overlay module found
I1002 19:46:49.187247 14172 out.go:179] * Using the docker driver based on user configuration
I1002 19:46:49.188768 14172 start.go:306] selected driver: docker
I1002 19:46:49.188791 14172 start.go:936] validating driver "docker" against <nil>
I1002 19:46:49.188804 14172 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1002 19:46:49.189411 14172 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1002 19:46:49.241362 14172 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-02 19:46:49.231985659 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1002 19:46:49.241534 14172 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1002 19:46:49.241822 14172 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1002 19:46:49.243528 14172 out.go:179] * Using Docker driver with root privileges
I1002 19:46:49.244808 14172 cni.go:84] Creating CNI manager for ""
I1002 19:46:49.244878 14172 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
I1002 19:46:49.244890 14172 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
I1002 19:46:49.244961 14172 start.go:350] cluster config:
{Name:addons-486748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-486748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1002 19:46:49.246245 14172 out.go:179] * Starting "addons-486748" primary control-plane node in "addons-486748" cluster
I1002 19:46:49.247401 14172 cache.go:124] Beginning downloading kic base image for docker with crio
I1002 19:46:49.248554 14172 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
I1002 19:46:49.249738 14172 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1002 19:46:49.249768 14172 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
I1002 19:46:49.249791 14172 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
I1002 19:46:49.249808 14172 cache.go:59] Caching tarball of preloaded images
I1002 19:46:49.249928 14172 preload.go:233] Found /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
I1002 19:46:49.249944 14172 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
I1002 19:46:49.250350 14172 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/config.json ...
I1002 19:46:49.250376 14172 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/config.json: {Name:mk00a5c747d89203b93c17e2728b3edb4ad2afc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 19:46:49.266988 14172 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
I1002 19:46:49.267112 14172 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
I1002 19:46:49.267137 14172 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory, skipping pull
I1002 19:46:49.267141 14172 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in cache, skipping pull
I1002 19:46:49.267149 14172 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
I1002 19:46:49.267156 14172 cache.go:166] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from local cache
I1002 19:47:01.617794 14172 cache.go:168] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from cached tarball
I1002 19:47:01.617834 14172 cache.go:233] Successfully downloaded all kic artifacts
I1002 19:47:01.617871 14172 start.go:361] acquireMachinesLock for addons-486748: {Name:mk12f88a4445be3b9140c03872d799e59dbb6f60 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1002 19:47:01.617984 14172 start.go:365] duration metric: took 91.828µs to acquireMachinesLock for "addons-486748"
I1002 19:47:01.618017 14172 start.go:94] Provisioning new machine with config: &{Name:addons-486748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-486748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
I1002 19:47:01.618138 14172 start.go:126] createHost starting for "" (driver="docker")
I1002 19:47:01.620051 14172 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
I1002 19:47:01.620359 14172 start.go:160] libmachine.API.Create for "addons-486748" (driver="docker")
I1002 19:47:01.620396 14172 client.go:168] LocalClient.Create starting
I1002 19:47:01.620512 14172 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem
I1002 19:47:01.666865 14172 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem
I1002 19:47:01.895395 14172 cli_runner.go:164] Run: docker network inspect addons-486748 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1002 19:47:01.911777 14172 cli_runner.go:211] docker network inspect addons-486748 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1002 19:47:01.911849 14172 network_create.go:284] running [docker network inspect addons-486748] to gather additional debugging logs...
I1002 19:47:01.911869 14172 cli_runner.go:164] Run: docker network inspect addons-486748
W1002 19:47:01.927622 14172 cli_runner.go:211] docker network inspect addons-486748 returned with exit code 1
I1002 19:47:01.927662 14172 network_create.go:287] error running [docker network inspect addons-486748]: docker network inspect addons-486748: exit status 1
stdout:
[]
stderr:
Error response from daemon: network addons-486748 not found
I1002 19:47:01.927685 14172 network_create.go:289] output of [docker network inspect addons-486748]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network addons-486748 not found
** /stderr **
I1002 19:47:01.927823 14172 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1002 19:47:01.944235 14172 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00029d000}
I1002 19:47:01.944292 14172 network_create.go:124] attempt to create docker network addons-486748 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1002 19:47:01.944342 14172 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-486748 addons-486748
I1002 19:47:01.999393 14172 network_create.go:108] docker network addons-486748 192.168.49.0/24 created
I1002 19:47:01.999423 14172 kic.go:121] calculated static IP "192.168.49.2" for the "addons-486748" container
I1002 19:47:01.999476 14172 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1002 19:47:02.015113 14172 cli_runner.go:164] Run: docker volume create addons-486748 --label name.minikube.sigs.k8s.io=addons-486748 --label created_by.minikube.sigs.k8s.io=true
I1002 19:47:02.032151 14172 oci.go:103] Successfully created a docker volume addons-486748
I1002 19:47:02.032222 14172 cli_runner.go:164] Run: docker run --rm --name addons-486748-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-486748 --entrypoint /usr/bin/test -v addons-486748:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
I1002 19:47:08.841052 14172 cli_runner.go:217] Completed: docker run --rm --name addons-486748-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-486748 --entrypoint /usr/bin/test -v addons-486748:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib: (6.808771103s)
I1002 19:47:08.841097 14172 oci.go:107] Successfully prepared a docker volume addons-486748
I1002 19:47:08.841125 14172 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1002 19:47:08.841146 14172 kic.go:194] Starting extracting preloaded images to volume ...
I1002 19:47:08.841196 14172 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-486748:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
I1002 19:47:13.263298 14172 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-486748:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.42206256s)
I1002 19:47:13.263328 14172 kic.go:203] duration metric: took 4.42217979s to extract preloaded images to volume ...
W1002 19:47:13.263441 14172 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W1002 19:47:13.263483 14172 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
I1002 19:47:13.263519 14172 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1002 19:47:13.317362 14172 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-486748 --name addons-486748 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-486748 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-486748 --network addons-486748 --ip 192.168.49.2 --volume addons-486748:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
I1002 19:47:13.600036 14172 cli_runner.go:164] Run: docker container inspect addons-486748 --format={{.State.Running}}
I1002 19:47:13.618884 14172 cli_runner.go:164] Run: docker container inspect addons-486748 --format={{.State.Status}}
I1002 19:47:13.639013 14172 cli_runner.go:164] Run: docker exec addons-486748 stat /var/lib/dpkg/alternatives/iptables
I1002 19:47:13.683853 14172 oci.go:144] the created container "addons-486748" has a running status.
I1002 19:47:13.683900 14172 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/addons-486748/id_rsa...
I1002 19:47:14.209719 14172 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-9327/.minikube/machines/addons-486748/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1002 19:47:14.236215 14172 cli_runner.go:164] Run: docker container inspect addons-486748 --format={{.State.Status}}
I1002 19:47:14.255956 14172 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1002 19:47:14.255981 14172 kic_runner.go:114] Args: [docker exec --privileged addons-486748 chown docker:docker /home/docker/.ssh/authorized_keys]
I1002 19:47:14.293510 14172 cli_runner.go:164] Run: docker container inspect addons-486748 --format={{.State.Status}}
I1002 19:47:14.311968 14172 machine.go:93] provisionDockerMachine start ...
I1002 19:47:14.312070 14172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-486748
I1002 19:47:14.329017 14172 main.go:141] libmachine: Using SSH client type: native
I1002 19:47:14.329242 14172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I1002 19:47:14.329256 14172 main.go:141] libmachine: About to run SSH command:
hostname
I1002 19:47:14.471463 14172 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-486748
I1002 19:47:14.471489 14172 ubuntu.go:182] provisioning hostname "addons-486748"
I1002 19:47:14.471554 14172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-486748
I1002 19:47:14.488781 14172 main.go:141] libmachine: Using SSH client type: native
I1002 19:47:14.488984 14172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I1002 19:47:14.488998 14172 main.go:141] libmachine: About to run SSH command:
sudo hostname addons-486748 && echo "addons-486748" | sudo tee /etc/hostname
I1002 19:47:14.639678 14172 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-486748
I1002 19:47:14.639775 14172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-486748
I1002 19:47:14.657006 14172 main.go:141] libmachine: Using SSH client type: native
I1002 19:47:14.657273 14172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I1002 19:47:14.657294 14172 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-486748' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-486748/g' /etc/hosts;
else
echo '127.0.1.1 addons-486748' | sudo tee -a /etc/hosts;
fi
fi
I1002 19:47:14.800181 14172 main.go:141] libmachine: SSH cmd err, output: <nil>:
I1002 19:47:14.800212 14172 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-9327/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-9327/.minikube}
I1002 19:47:14.800252 14172 ubuntu.go:190] setting up certificates
I1002 19:47:14.800268 14172 provision.go:84] configureAuth start
I1002 19:47:14.800322 14172 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-486748
I1002 19:47:14.818158 14172 provision.go:143] copyHostCerts
I1002 19:47:14.818232 14172 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem (1082 bytes)
I1002 19:47:14.818341 14172 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem (1123 bytes)
I1002 19:47:14.818447 14172 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem (1679 bytes)
I1002 19:47:14.818510 14172 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem org=jenkins.addons-486748 san=[127.0.0.1 192.168.49.2 addons-486748 localhost minikube]
I1002 19:47:14.975696 14172 provision.go:177] copyRemoteCerts
I1002 19:47:14.975756 14172 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1002 19:47:14.975791 14172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-486748
I1002 19:47:14.992892 14172 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/addons-486748/id_rsa Username:docker}
I1002 19:47:15.093537 14172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1002 19:47:15.112185 14172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I1002 19:47:15.128814 14172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1002 19:47:15.144614 14172 provision.go:87] duration metric: took 344.309849ms to configureAuth
I1002 19:47:15.144644 14172 ubuntu.go:206] setting minikube options for container-runtime
I1002 19:47:15.144846 14172 config.go:182] Loaded profile config "addons-486748": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 19:47:15.144947 14172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-486748
I1002 19:47:15.162214 14172 main.go:141] libmachine: Using SSH client type: native
I1002 19:47:15.162421 14172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I1002 19:47:15.162440 14172 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I1002 19:47:15.418163 14172 main.go:141] libmachine: SSH cmd err, output: <nil>:
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
I1002 19:47:15.418185 14172 machine.go:96] duration metric: took 1.106195757s to provisionDockerMachine
I1002 19:47:15.418196 14172 client.go:171] duration metric: took 13.797788888s to LocalClient.Create
I1002 19:47:15.418212 14172 start.go:168] duration metric: took 13.797855415s to libmachine.API.Create "addons-486748"
I1002 19:47:15.418219 14172 start.go:294] postStartSetup for "addons-486748" (driver="docker")
I1002 19:47:15.418228 14172 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1002 19:47:15.418285 14172 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1002 19:47:15.418331 14172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-486748
I1002 19:47:15.435548 14172 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/addons-486748/id_rsa Username:docker}
I1002 19:47:15.538706 14172 ssh_runner.go:195] Run: cat /etc/os-release
I1002 19:47:15.542216 14172 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1002 19:47:15.542251 14172 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1002 19:47:15.542266 14172 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/addons for local assets ...
I1002 19:47:15.542334 14172 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/files for local assets ...
I1002 19:47:15.542367 14172 start.go:297] duration metric: took 124.141576ms for postStartSetup
I1002 19:47:15.542756 14172 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-486748
I1002 19:47:15.560824 14172 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/config.json ...
I1002 19:47:15.561127 14172 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1002 19:47:15.561171 14172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-486748
I1002 19:47:15.578232 14172 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/addons-486748/id_rsa Username:docker}
I1002 19:47:15.676624 14172 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1002 19:47:15.680897 14172 start.go:129] duration metric: took 14.062740747s to createHost
I1002 19:47:15.680923 14172 start.go:84] releasing machines lock for "addons-486748", held for 14.062924618s
I1002 19:47:15.680981 14172 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-486748
I1002 19:47:15.698506 14172 ssh_runner.go:195] Run: cat /version.json
I1002 19:47:15.698536 14172 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1002 19:47:15.698561 14172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-486748
I1002 19:47:15.698595 14172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-486748
I1002 19:47:15.717794 14172 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/addons-486748/id_rsa Username:docker}
I1002 19:47:15.719465 14172 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/addons-486748/id_rsa Username:docker}
I1002 19:47:15.871074 14172 ssh_runner.go:195] Run: systemctl --version
I1002 19:47:15.877278 14172 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
I1002 19:47:15.911918 14172 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1002 19:47:15.916385 14172 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1002 19:47:15.916452 14172 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1002 19:47:15.942087 14172 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1002 19:47:15.942113 14172 start.go:496] detecting cgroup driver to use...
I1002 19:47:15.942147 14172 detect.go:190] detected "systemd" cgroup driver on host os
I1002 19:47:15.942209 14172 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1002 19:47:15.957607 14172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1002 19:47:15.969570 14172 docker.go:218] disabling cri-docker service (if available) ...
I1002 19:47:15.969623 14172 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1002 19:47:15.985521 14172 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1002 19:47:16.002428 14172 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1002 19:47:16.084633 14172 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1002 19:47:16.169148 14172 docker.go:234] disabling docker service ...
I1002 19:47:16.169206 14172 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1002 19:47:16.187037 14172 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1002 19:47:16.199516 14172 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1002 19:47:16.282323 14172 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1002 19:47:16.361980 14172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1002 19:47:16.374469 14172 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I1002 19:47:16.388487 14172 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
I1002 19:47:16.388541 14172 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
I1002 19:47:16.398812 14172 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
I1002 19:47:16.398896 14172 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
I1002 19:47:16.407413 14172 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
I1002 19:47:16.415894 14172 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
I1002 19:47:16.424226 14172 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1002 19:47:16.432220 14172 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
I1002 19:47:16.440541 14172 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
I1002 19:47:16.453557 14172 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
I1002 19:47:16.462034 14172 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1002 19:47:16.469308 14172 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I1002 19:47:16.469357 14172 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I1002 19:47:16.481016 14172 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1002 19:47:16.488267 14172 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1002 19:47:16.564684 14172 ssh_runner.go:195] Run: sudo systemctl restart crio
I1002 19:47:16.666081 14172 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
I1002 19:47:16.666151 14172 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
I1002 19:47:16.670081 14172 start.go:564] Will wait 60s for crictl version
I1002 19:47:16.670138 14172 ssh_runner.go:195] Run: which crictl
I1002 19:47:16.673464 14172 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1002 19:47:16.696583 14172 start.go:580] Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.34.1
RuntimeApiVersion: v1
I1002 19:47:16.696726 14172 ssh_runner.go:195] Run: crio --version
I1002 19:47:16.722808 14172 ssh_runner.go:195] Run: crio --version
I1002 19:47:16.751333 14172 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
I1002 19:47:16.752754 14172 cli_runner.go:164] Run: docker network inspect addons-486748 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1002 19:47:16.770947 14172 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I1002 19:47:16.775007 14172 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1002 19:47:16.785427 14172 kubeadm.go:883] updating cluster {Name:addons-486748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-486748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1002 19:47:16.785596 14172 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1002 19:47:16.785683 14172 ssh_runner.go:195] Run: sudo crictl images --output json
I1002 19:47:16.817800 14172 crio.go:514] all images are preloaded for cri-o runtime.
I1002 19:47:16.817820 14172 crio.go:433] Images already preloaded, skipping extraction
I1002 19:47:16.817869 14172 ssh_runner.go:195] Run: sudo crictl images --output json
I1002 19:47:16.843260 14172 crio.go:514] all images are preloaded for cri-o runtime.
I1002 19:47:16.843282 14172 cache_images.go:85] Images are preloaded, skipping loading
I1002 19:47:16.843290 14172 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
I1002 19:47:16.843370 14172 kubeadm.go:946] kubelet [Unit]
Wants=crio.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-486748 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.34.1 ClusterName:addons-486748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1002 19:47:16.843431 14172 ssh_runner.go:195] Run: crio config
I1002 19:47:16.886753 14172 cni.go:84] Creating CNI manager for ""
I1002 19:47:16.886782 14172 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
I1002 19:47:16.886800 14172 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I1002 19:47:16.886821 14172 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-486748 NodeName:addons-486748 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1002 19:47:16.886971 14172 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.49.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/crio/crio.sock
name: "addons-486748"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.49.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I1002 19:47:16.887039 14172 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
I1002 19:47:16.894881 14172 binaries.go:44] Found k8s binaries, skipping transfer
I1002 19:47:16.894956 14172 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1002 19:47:16.902634 14172 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
I1002 19:47:16.914966 14172 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1002 19:47:16.930275 14172 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
I1002 19:47:16.942556 14172 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I1002 19:47:16.946145 14172 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1002 19:47:16.955790 14172 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1002 19:47:17.027129 14172 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1002 19:47:17.050908 14172 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748 for IP: 192.168.49.2
I1002 19:47:17.050932 14172 certs.go:195] generating shared ca certs ...
I1002 19:47:17.050953 14172 certs.go:227] acquiring lock for ca certs: {Name:mk51d7ca06e943b86909f2e3a4140d85edda0d43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 19:47:17.051078 14172 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key
I1002 19:47:17.386505 14172 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt ...
I1002 19:47:17.386536 14172 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt: {Name:mk786afdd62ef3a772faf0132a7a1ec7f6ce72dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 19:47:17.386725 14172 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key ...
I1002 19:47:17.386744 14172 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key: {Name:mk2d72d3a4f6d4419e21e1fad643fb52f178516c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 19:47:17.386825 14172 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key
I1002 19:47:17.454269 14172 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt ...
I1002 19:47:17.454296 14172 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt: {Name:mk1e303a39d725289fbf8ee759df3fa9d45b3854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 19:47:17.454446 14172 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key ...
I1002 19:47:17.454456 14172 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key: {Name:mk3548622bc975a3985c07a4d3c6f05eb739b141 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 19:47:17.454518 14172 certs.go:257] generating profile certs ...
I1002 19:47:17.454572 14172 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/client.key
I1002 19:47:17.454586 14172 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/client.crt with IP's: []
I1002 19:47:17.589435 14172 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/client.crt ...
I1002 19:47:17.589466 14172 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/client.crt: {Name:mkda052f537b3a8fe8f52ad21ef111e7ec46e7ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 19:47:17.589655 14172 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/client.key ...
I1002 19:47:17.589667 14172 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/client.key: {Name:mk3e2aab61de07ec774bed14a198f947b6c813ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 19:47:17.589744 14172 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/apiserver.key.7bbafc29
I1002 19:47:17.589764 14172 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/apiserver.crt.7bbafc29 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
I1002 19:47:17.885024 14172 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/apiserver.crt.7bbafc29 ...
I1002 19:47:17.885054 14172 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/apiserver.crt.7bbafc29: {Name:mk7ce1b5544769da61acbaf89af97631724f0bbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 19:47:17.885215 14172 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/apiserver.key.7bbafc29 ...
I1002 19:47:17.885228 14172 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/apiserver.key.7bbafc29: {Name:mkdc658fe27e2c44d1169b7de754f9a79aa2d243 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 19:47:17.885293 14172 certs.go:382] copying /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/apiserver.crt.7bbafc29 -> /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/apiserver.crt
I1002 19:47:17.885368 14172 certs.go:386] copying /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/apiserver.key.7bbafc29 -> /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/apiserver.key
I1002 19:47:17.885415 14172 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/proxy-client.key
I1002 19:47:17.885429 14172 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/proxy-client.crt with IP's: []
I1002 19:47:18.275309 14172 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/proxy-client.crt ...
I1002 19:47:18.275345 14172 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/proxy-client.crt: {Name:mk27b2d4b020fc9e8e22760e08299eb5542b2473 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 19:47:18.275538 14172 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/proxy-client.key ...
I1002 19:47:18.275550 14172 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/proxy-client.key: {Name:mk79533042f36070d179aa737abedeabdfe5f0e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 19:47:18.275801 14172 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem (1675 bytes)
I1002 19:47:18.275842 14172 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem (1082 bytes)
I1002 19:47:18.275869 14172 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem (1123 bytes)
I1002 19:47:18.275893 14172 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem (1679 bytes)
I1002 19:47:18.276458 14172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1002 19:47:18.294136 14172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1002 19:47:18.310676 14172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1002 19:47:18.327504 14172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1002 19:47:18.343634 14172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I1002 19:47:18.360194 14172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1002 19:47:18.376638 14172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1002 19:47:18.393694 14172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1002 19:47:18.410259 14172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1002 19:47:18.428359 14172 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
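At this point every certificate and the kubeconfig are in place on the node. A quick sanity check of the copied apiserver cert (a sketch; run on the node via "minikube ssh -p addons-486748", and the -ext option assumes OpenSSL 1.1.1+):

$ sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -subject -ext subjectAltName
# the SANs should include 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.49.2, matching the IPs the cert was generated with above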
I1002 19:47:18.440533 14172 ssh_runner.go:195] Run: openssl version
I1002 19:47:18.446551 14172 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1002 19:47:18.457523 14172 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1002 19:47:18.461348 14172 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 2 19:47 /usr/share/ca-certificates/minikubeCA.pem
I1002 19:47:18.461397 14172 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1002 19:47:18.495069 14172 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
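The b5213941.0 link name is OpenSSL's subject hash of the minikube CA, so the step above can be reproduced by hand (a sketch, using the paths from this run):

$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
b5213941
$ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
# OpenSSL looks up trusted CAs through <subject-hash>.0 symlinks in /etc/ssl/certs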
I1002 19:47:18.503632 14172 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1002 19:47:18.507446 14172 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1002 19:47:18.507497 14172 kubeadm.go:400] StartCluster: {Name:addons-486748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-486748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1002 19:47:18.507559 14172 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I1002 19:47:18.507623 14172 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1002 19:47:18.535082 14172 cri.go:89] found id: ""
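An empty ID list is expected on a fresh node. The same query can be repeated by hand when debugging (a sketch; socket path as used by CRI-O elsewhere in this log):

$ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock \
    ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
# no output means no kube-system containers exist yet, so kubeadm init proceeds on a clean slate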
I1002 19:47:18.535161 14172 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1002 19:47:18.542948 14172 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1002 19:47:18.550564 14172 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I1002 19:47:18.550631 14172 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1002 19:47:18.557899 14172 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1002 19:47:18.557917 14172 kubeadm.go:157] found existing configuration files:
I1002 19:47:18.557952 14172 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1002 19:47:18.565100 14172 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1002 19:47:18.565151 14172 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1002 19:47:18.571947 14172 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1002 19:47:18.578752 14172 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1002 19:47:18.578810 14172 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1002 19:47:18.585583 14172 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1002 19:47:18.592729 14172 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1002 19:47:18.592779 14172 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1002 19:47:18.599549 14172 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1002 19:47:18.606415 14172 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1002 19:47:18.606478 14172 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1002 19:47:18.613140 14172 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1002 19:47:18.681509 14172 kubeadm.go:318] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
I1002 19:47:18.737422 14172 kubeadm.go:318] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
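The Service-Kubelet warning appears harmless in this flow, since minikube manages the kubelet itself, but it can be silenced on the node exactly as the message suggests:

$ sudo systemctl enable kubelet.service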
I1002 19:51:23.249125 14172 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
I1002 19:51:23.249257 14172 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
I1002 19:51:23.251457 14172 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
I1002 19:51:23.251523 14172 kubeadm.go:318] [preflight] Running pre-flight checks
I1002 19:51:23.251630 14172 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
I1002 19:51:23.251738 14172 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
I1002 19:51:23.251803 14172 kubeadm.go:318] OS: Linux
I1002 19:51:23.251843 14172 kubeadm.go:318] CGROUPS_CPU: enabled
I1002 19:51:23.251901 14172 kubeadm.go:318] CGROUPS_CPUSET: enabled
I1002 19:51:23.251969 14172 kubeadm.go:318] CGROUPS_DEVICES: enabled
I1002 19:51:23.252035 14172 kubeadm.go:318] CGROUPS_FREEZER: enabled
I1002 19:51:23.252079 14172 kubeadm.go:318] CGROUPS_MEMORY: enabled
I1002 19:51:23.252119 14172 kubeadm.go:318] CGROUPS_PIDS: enabled
I1002 19:51:23.252174 14172 kubeadm.go:318] CGROUPS_HUGETLB: enabled
I1002 19:51:23.252254 14172 kubeadm.go:318] CGROUPS_IO: enabled
I1002 19:51:23.252380 14172 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
I1002 19:51:23.252560 14172 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1002 19:51:23.252701 14172 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1002 19:51:23.252810 14172 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1002 19:51:23.255322 14172 out.go:252] - Generating certificates and keys ...
I1002 19:51:23.255409 14172 kubeadm.go:318] [certs] Using existing ca certificate authority
I1002 19:51:23.255519 14172 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
I1002 19:51:23.255616 14172 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
I1002 19:51:23.255729 14172 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
I1002 19:51:23.255813 14172 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
I1002 19:51:23.255892 14172 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
I1002 19:51:23.255983 14172 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
I1002 19:51:23.256123 14172 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-486748 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1002 19:51:23.256196 14172 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
I1002 19:51:23.256367 14172 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-486748 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1002 19:51:23.256467 14172 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
I1002 19:51:23.256528 14172 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
I1002 19:51:23.256567 14172 kubeadm.go:318] [certs] Generating "sa" key and public key
I1002 19:51:23.256716 14172 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1002 19:51:23.256792 14172 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
I1002 19:51:23.256861 14172 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1002 19:51:23.256940 14172 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1002 19:51:23.257047 14172 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1002 19:51:23.257137 14172 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1002 19:51:23.257245 14172 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1002 19:51:23.257342 14172 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1002 19:51:23.259112 14172 out.go:252] - Booting up control plane ...
I1002 19:51:23.259225 14172 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1002 19:51:23.259350 14172 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1002 19:51:23.259432 14172 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1002 19:51:23.259514 14172 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1002 19:51:23.259587 14172 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1002 19:51:23.259726 14172 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1002 19:51:23.259837 14172 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1002 19:51:23.259900 14172 kubeadm.go:318] [kubelet-start] Starting the kubelet
I1002 19:51:23.260072 14172 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1002 19:51:23.260213 14172 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1002 19:51:23.260293 14172 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.755891ms
I1002 19:51:23.260396 14172 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1002 19:51:23.260488 14172 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
I1002 19:51:23.260607 14172 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1002 19:51:23.260742 14172 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1002 19:51:23.260872 14172 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001135688s
I1002 19:51:23.260976 14172 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001214908s
I1002 19:51:23.261081 14172 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001260227s
I1002 19:51:23.261090 14172 kubeadm.go:318]
I1002 19:51:23.261198 14172 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
I1002 19:51:23.261297 14172 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
I1002 19:51:23.261410 14172 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
I1002 19:51:23.261533 14172 kubeadm.go:318] - 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
I1002 19:51:23.261622 14172 kubeadm.go:318] Once you have found the failing container, you can inspect its logs with:
I1002 19:51:23.261726 14172 kubeadm.go:318] - 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
I1002 19:51:23.261750 14172 kubeadm.go:318]
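Before minikube retries below, kubeadm's own hint is the fastest way to identify the failing component; a sketch of running it on the node:

$ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
$ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
# substitute CONTAINERID with an ID from the first listing; a crash-looping kube-apiserver log usually names the root cause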
W1002 19:51:23.261900 14172 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 6.8.0-1041-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_IO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [addons-486748 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [addons-486748 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.755891ms
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-apiserver is not healthy after 4m0.001135688s
[control-plane-check] kube-controller-manager is not healthy after 4m0.001214908s
[control-plane-check] kube-scheduler is not healthy after 4m0.001260227s
A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
To see the stack trace of this error execute with --v=5 or higher
I1002 19:51:23.261986 14172 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
I1002 19:51:23.703790 14172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1002 19:51:23.716020 14172 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I1002 19:51:23.716072 14172 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1002 19:51:23.723743 14172 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1002 19:51:23.723761 14172 kubeadm.go:157] found existing configuration files:
I1002 19:51:23.723801 14172 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1002 19:51:23.731372 14172 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1002 19:51:23.731421 14172 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1002 19:51:23.738512 14172 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1002 19:51:23.746362 14172 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1002 19:51:23.746413 14172 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1002 19:51:23.753680 14172 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1002 19:51:23.760844 14172 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1002 19:51:23.760881 14172 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1002 19:51:23.767473 14172 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1002 19:51:23.774515 14172 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1002 19:51:23.774552 14172 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1002 19:51:23.781363 14172 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1002 19:51:23.815035 14172 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
I1002 19:51:23.815109 14172 kubeadm.go:318] [preflight] Running pre-flight checks
I1002 19:51:23.833732 14172 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
I1002 19:51:23.833829 14172 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
I1002 19:51:23.833880 14172 kubeadm.go:318] OS: Linux
I1002 19:51:23.833938 14172 kubeadm.go:318] CGROUPS_CPU: enabled
I1002 19:51:23.833989 14172 kubeadm.go:318] CGROUPS_CPUSET: enabled
I1002 19:51:23.834031 14172 kubeadm.go:318] CGROUPS_DEVICES: enabled
I1002 19:51:23.834101 14172 kubeadm.go:318] CGROUPS_FREEZER: enabled
I1002 19:51:23.834186 14172 kubeadm.go:318] CGROUPS_MEMORY: enabled
I1002 19:51:23.834262 14172 kubeadm.go:318] CGROUPS_PIDS: enabled
I1002 19:51:23.834331 14172 kubeadm.go:318] CGROUPS_HUGETLB: enabled
I1002 19:51:23.834404 14172 kubeadm.go:318] CGROUPS_IO: enabled
I1002 19:51:23.887155 14172 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
I1002 19:51:23.887253 14172 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1002 19:51:23.887375 14172 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1002 19:51:23.893210 14172 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1002 19:51:23.896460 14172 out.go:252] - Generating certificates and keys ...
I1002 19:51:23.896564 14172 kubeadm.go:318] [certs] Using existing ca certificate authority
I1002 19:51:23.896683 14172 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
I1002 19:51:23.896766 14172 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1002 19:51:23.896838 14172 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
I1002 19:51:23.896957 14172 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
I1002 19:51:23.897044 14172 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
I1002 19:51:23.897132 14172 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
I1002 19:51:23.897215 14172 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
I1002 19:51:23.897293 14172 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1002 19:51:23.897387 14172 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1002 19:51:23.897424 14172 kubeadm.go:318] [certs] Using the existing "sa" key
I1002 19:51:23.897469 14172 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1002 19:51:24.497248 14172 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
I1002 19:51:24.717728 14172 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1002 19:51:24.811928 14172 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1002 19:51:25.063570 14172 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1002 19:51:25.151082 14172 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1002 19:51:25.151462 14172 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1002 19:51:25.153580 14172 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1002 19:51:25.155601 14172 out.go:252] - Booting up control plane ...
I1002 19:51:25.155713 14172 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1002 19:51:25.155841 14172 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1002 19:51:25.156725 14172 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1002 19:51:25.169495 14172 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1002 19:51:25.169587 14172 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1002 19:51:25.175662 14172 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1002 19:51:25.175909 14172 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1002 19:51:25.175955 14172 kubeadm.go:318] [kubelet-start] Starting the kubelet
I1002 19:51:25.274141 14172 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1002 19:51:25.274297 14172 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1002 19:51:25.775812 14172 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.811567ms
I1002 19:51:25.778423 14172 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1002 19:51:25.778548 14172 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
I1002 19:51:25.778637 14172 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1002 19:51:25.778775 14172 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1002 19:55:25.779313 14172 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000264431s
I1002 19:55:25.779534 14172 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000547002s
I1002 19:55:25.779756 14172 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.00067383s
I1002 19:55:25.779832 14172 kubeadm.go:318]
I1002 19:55:25.780094 14172 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
I1002 19:55:25.780274 14172 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
I1002 19:55:25.780428 14172 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
I1002 19:55:25.780593 14172 kubeadm.go:318] - 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
I1002 19:55:25.780793 14172 kubeadm.go:318] Once you have found the failing container, you can inspect its logs with:
I1002 19:55:25.781023 14172 kubeadm.go:318] - 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
I1002 19:55:25.781038 14172 kubeadm.go:318]
I1002 19:55:25.783122 14172 kubeadm.go:318] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
I1002 19:55:25.783256 14172 kubeadm.go:318] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1002 19:55:25.783906 14172 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
I1002 19:55:25.784012 14172 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
I1002 19:55:25.784103 14172 kubeadm.go:402] duration metric: took 8m7.276606859s to StartCluster
I1002 19:55:25.784157 14172 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
I1002 19:55:25.784220 14172 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1002 19:55:25.809866 14172 cri.go:89] found id: ""
I1002 19:55:25.809904 14172 logs.go:282] 0 containers: []
W1002 19:55:25.809914 14172 logs.go:284] No container was found matching "kube-apiserver"
I1002 19:55:25.809924 14172 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
I1002 19:55:25.809989 14172 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1002 19:55:25.834613 14172 cri.go:89] found id: ""
I1002 19:55:25.834636 14172 logs.go:282] 0 containers: []
W1002 19:55:25.834644 14172 logs.go:284] No container was found matching "etcd"
I1002 19:55:25.834666 14172 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
I1002 19:55:25.834719 14172 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1002 19:55:25.859621 14172 cri.go:89] found id: ""
I1002 19:55:25.859642 14172 logs.go:282] 0 containers: []
W1002 19:55:25.859666 14172 logs.go:284] No container was found matching "coredns"
I1002 19:55:25.859674 14172 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
I1002 19:55:25.859724 14172 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1002 19:55:25.884720 14172 cri.go:89] found id: ""
I1002 19:55:25.884746 14172 logs.go:282] 0 containers: []
W1002 19:55:25.884756 14172 logs.go:284] No container was found matching "kube-scheduler"
I1002 19:55:25.884764 14172 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
I1002 19:55:25.884811 14172 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1002 19:55:25.910002 14172 cri.go:89] found id: ""
I1002 19:55:25.910022 14172 logs.go:282] 0 containers: []
W1002 19:55:25.910029 14172 logs.go:284] No container was found matching "kube-proxy"
I1002 19:55:25.910034 14172 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
I1002 19:55:25.910083 14172 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1002 19:55:25.935352 14172 cri.go:89] found id: ""
I1002 19:55:25.935373 14172 logs.go:282] 0 containers: []
W1002 19:55:25.935381 14172 logs.go:284] No container was found matching "kube-controller-manager"
I1002 19:55:25.935387 14172 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
I1002 19:55:25.935429 14172 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1002 19:55:25.960341 14172 cri.go:89] found id: ""
I1002 19:55:25.960364 14172 logs.go:282] 0 containers: []
W1002 19:55:25.960372 14172 logs.go:284] No container was found matching "kindnet"
I1002 19:55:25.960381 14172 logs.go:123] Gathering logs for dmesg ...
I1002 19:55:25.960394 14172 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1002 19:55:25.971311 14172 logs.go:123] Gathering logs for describe nodes ...
I1002 19:55:25.971334 14172 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1002 19:55:26.027119 14172 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1002 19:55:26.019599 2390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1002 19:55:26.020109 2390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1002 19:55:26.021739 2390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1002 19:55:26.022159 2390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1002 19:55:26.024489 2390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
E1002 19:55:26.019599 2390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1002 19:55:26.020109 2390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1002 19:55:26.021739 2390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1002 19:55:26.022159 2390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1002 19:55:26.024489 2390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
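The connection-refused errors are consistent with the apiserver never binding its port. The liveness endpoint kubeadm was polling can also be probed directly (a sketch, endpoint taken from the checks above):

$ curl -k https://192.168.49.2:8443/livez
# -k skips TLS verification; "connection refused" here confirms nothing is listening on 8443 at all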
I1002 19:55:26.027138 14172 logs.go:123] Gathering logs for CRI-O ...
I1002 19:55:26.027149 14172 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
I1002 19:55:26.088066 14172 logs.go:123] Gathering logs for container status ...
I1002 19:55:26.088099 14172 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1002 19:55:26.115405 14172 logs.go:123] Gathering logs for kubelet ...
I1002 19:55:26.115431 14172 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
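With no kube-* containers found, the kubelet journal gathered here is usually the decisive artifact. A hedged way to narrow those 400 lines down to the failure (run on the node):

$ sudo journalctl -u kubelet --no-pager | grep -iE 'error|fail' | tail -n 50
# repeated sandbox-creation or image-pull errors at this stage would explain why crictl listed no control-plane containers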
W1002 19:55:26.182640 14172 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 6.8.0-1041-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_IO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.811567ms
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-controller-manager is not healthy after 4m0.000264431s
[control-plane-check] kube-apiserver is not healthy after 4m0.000547002s
[control-plane-check] kube-scheduler is not healthy after 4m0.00067383s
A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
To see the stack trace of this error execute with --v=5 or higher
W1002 19:55:26.182714 14172 out.go:285] *
*
W1002 19:55:26.182773 14172 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 6.8.0-1041-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_IO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.811567ms
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-controller-manager is not healthy after 4m0.000264431s
[control-plane-check] kube-apiserver is not healthy after 4m0.000547002s
[control-plane-check] kube-scheduler is not healthy after 4m0.00067383s
A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtime's CLI.
Here is one example of how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
To see the stack trace of this error execute with --v=5 or higher
W1002 19:55:26.182787 14172 out.go:285] *
*
W1002 19:55:26.184528 14172 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1002 19:55:26.187968 14172 out.go:203]
W1002 19:55:26.189180 14172 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout/stderr: [identical kubeadm init output as in the "Error starting cluster" block above]
W1002 19:55:26.189206 14172 out.go:285] *
*
I1002 19:55:26.190431 14172 out.go:203]
** /stderr **
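The triage steps kubeadm suggests in the failure output can be run directly on the minikube node. A minimal sketch, assuming the CRI-O socket path and profile name from this run; CONTAINERID is a placeholder to be substituted from the ps output:

  # enter the node for the failing profile
  minikube ssh -p addons-486748
  # list all kube containers, including exited ones, via the CRI-O socket
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
  # dump the logs of the failing container
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
  # the kubelet journal usually shows why the static pods never came up
  sudo journalctl -u kubelet --no-pager | tail -n 100
  # clears the Service-Kubelet preflight warning seen above
  sudo systemctl enable kubelet.service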
addons_test.go:110: out/minikube-linux-amd64 start -p addons-486748 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher failed: exit status 80
--- FAIL: TestAddons/Setup (517.17s)
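The control-plane-check failures are plain HTTPS health probes, so they can be reproduced by hand from inside the node to distinguish a component that never started from one that is merely slow. A sketch using the exact endpoints from the log above (-k skips certificate verification, which is acceptable for a liveness probe):

  # kube-apiserver livez on the node IP reported by kubeadm
  curl -k https://192.168.49.2:8443/livez
  # kube-controller-manager and kube-scheduler health endpoints on localhost
  curl -k https://127.0.0.1:10257/healthz
  curl -k https://127.0.0.1:10259/livez

A "connection refused" from these probes, matching the dial errors in the log, means the process is not listening at all, which points back at the container logs rather than at startup latency.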