=== RUN TestAddons/Setup
addons_test.go:108: (dbg) Run: out/minikube-linux-amd64 start -p addons-139298 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p addons-139298 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: exit status 80 (8m34.443995126s)
-- stdout --
* [addons-139298] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
- MINIKUBE_LOCATION=21683
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "addons-139298" primary control-plane node in "addons-139298" cluster
* Pulling base image v0.0.48-1759745255-21703 ...
* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
-- /stdout --
** stderr **
I1009 18:39:29.416317 142849 out.go:360] Setting OutFile to fd 1 ...
I1009 18:39:29.416570 142849 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:39:29.416579 142849 out.go:374] Setting ErrFile to fd 2...
I1009 18:39:29.416583 142849 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:39:29.416799 142849 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-137890/.minikube/bin
I1009 18:39:29.417333 142849 out.go:368] Setting JSON to false
I1009 18:39:29.418260 142849 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1318,"bootTime":1760033851,"procs":329,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1009 18:39:29.418358 142849 start.go:143] virtualization: kvm guest
I1009 18:39:29.420525 142849 out.go:179] * [addons-139298] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1009 18:39:29.421984 142849 notify.go:221] Checking for updates...
I1009 18:39:29.422026 142849 out.go:179] - MINIKUBE_LOCATION=21683
I1009 18:39:29.423407 142849 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1009 18:39:29.424940 142849 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/21683-137890/kubeconfig
I1009 18:39:29.426449 142849 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-137890/.minikube
I1009 18:39:29.427922 142849 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1009 18:39:29.429301 142849 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1009 18:39:29.430873 142849 driver.go:422] Setting default libvirt URI to qemu:///system
I1009 18:39:29.454978 142849 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
I1009 18:39:29.455071 142849 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1009 18:39:29.518195 142849 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-09 18:39:29.507304502 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1009 18:39:29.518296 142849 docker.go:319] overlay module found
I1009 18:39:29.520102 142849 out.go:179] * Using the docker driver based on user configuration
I1009 18:39:29.521434 142849 start.go:309] selected driver: docker
I1009 18:39:29.521453 142849 start.go:930] validating driver "docker" against <nil>
I1009 18:39:29.521465 142849 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1009 18:39:29.522156 142849 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1009 18:39:29.586679 142849 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-09 18:39:29.57655356 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652158464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1009 18:39:29.586836 142849 start_flags.go:328] no existing cluster config was found, will generate one from the flags
I1009 18:39:29.587043 142849 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1009 18:39:29.588866 142849 out.go:179] * Using Docker driver with root privileges
I1009 18:39:29.590163 142849 cni.go:84] Creating CNI manager for ""
I1009 18:39:29.590212 142849 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
I1009 18:39:29.590224 142849 start_flags.go:337] Found "CNI" CNI - setting NetworkPlugin=cni
I1009 18:39:29.590297 142849 start.go:353] cluster config:
{Name:addons-139298 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-139298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1009 18:39:29.592015 142849 out.go:179] * Starting "addons-139298" primary control-plane node in "addons-139298" cluster
I1009 18:39:29.593397 142849 cache.go:123] Beginning downloading kic base image for docker with crio
I1009 18:39:29.594829 142849 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
I1009 18:39:29.596121 142849 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1009 18:39:29.596154 142849 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
I1009 18:39:29.596162 142849 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
I1009 18:39:29.596171 142849 cache.go:58] Caching tarball of preloaded images
I1009 18:39:29.596257 142849 preload.go:233] Found /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
I1009 18:39:29.596267 142849 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
I1009 18:39:29.596570 142849 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/config.json ...
I1009 18:39:29.596601 142849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/config.json: {Name:mk74c72bc049148ef11108d8a71c51887cf15c22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
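The profile config saved above is plain JSON, so the persisted values can be spot-checked directly. A minimal sketch, assuming jq is installed on the host (the path is taken from the log line above; the key names mirror the cluster config dump):
# Pull a few fields out of the saved profile config
jq '{driver: .Driver, memory: .Memory, runtime: .KubernetesConfig.ContainerRuntime}' \
  /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/config.json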
I1009 18:39:29.612903 142849 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 to local cache
I1009 18:39:29.613024 142849 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory
I1009 18:39:29.613041 142849 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory, skipping pull
I1009 18:39:29.613045 142849 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in cache, skipping pull
I1009 18:39:29.613055 142849 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 as a tarball
I1009 18:39:29.613062 142849 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 from local cache
I1009 18:39:42.574172 142849 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 from cached tarball
I1009 18:39:42.574235 142849 cache.go:232] Successfully downloaded all kic artifacts
I1009 18:39:42.574286 142849 start.go:361] acquireMachinesLock for addons-139298: {Name:mkaa7e9ae30ef19808b4315a06326fba69a900ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1009 18:39:42.575033 142849 start.go:365] duration metric: took 710.33µs to acquireMachinesLock for "addons-139298"
I1009 18:39:42.575077 142849 start.go:94] Provisioning new machine with config: &{Name:addons-139298 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-139298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
I1009 18:39:42.575154 142849 start.go:126] createHost starting for "" (driver="docker")
I1009 18:39:42.655739 142849 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
I1009 18:39:42.656037 142849 start.go:160] libmachine.API.Create for "addons-139298" (driver="docker")
I1009 18:39:42.656070 142849 client.go:168] LocalClient.Create starting
I1009 18:39:42.656193 142849 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem
I1009 18:39:42.734530 142849 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem
I1009 18:39:42.944837 142849 cli_runner.go:164] Run: docker network inspect addons-139298 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1009 18:39:42.962609 142849 cli_runner.go:211] docker network inspect addons-139298 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1009 18:39:42.962703 142849 network_create.go:284] running [docker network inspect addons-139298] to gather additional debugging logs...
I1009 18:39:42.962724 142849 cli_runner.go:164] Run: docker network inspect addons-139298
W1009 18:39:42.979456 142849 cli_runner.go:211] docker network inspect addons-139298 returned with exit code 1
I1009 18:39:42.979493 142849 network_create.go:287] error running [docker network inspect addons-139298]: docker network inspect addons-139298: exit status 1
stdout:
[]
stderr:
Error response from daemon: network addons-139298 not found
I1009 18:39:42.979506 142849 network_create.go:289] output of [docker network inspect addons-139298]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network addons-139298 not found
** /stderr **
I1009 18:39:42.979586 142849 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1009 18:39:42.996840 142849 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0019a2f30}
I1009 18:39:42.996878 142849 network_create.go:124] attempt to create docker network addons-139298 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1009 18:39:42.996925 142849 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-139298 addons-139298
I1009 18:39:43.055145 142849 network_create.go:108] docker network addons-139298 192.168.49.0/24 created
I1009 18:39:43.055177 142849 kic.go:121] calculated static IP "192.168.49.2" for the "addons-139298" container
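The network create above pins the cluster to 192.168.49.0/24 with gateway 192.168.49.1 and a static node IP of 192.168.49.2. A hedged sanity check with the stock docker CLI (network name from the log):
# Confirm the bridge network carries the subnet/gateway minikube chose
docker network inspect addons-139298 \
  --format 'subnet={{(index .IPAM.Config 0).Subnet}} gateway={{(index .IPAM.Config 0).Gateway}}'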
I1009 18:39:43.055257 142849 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1009 18:39:43.071774 142849 cli_runner.go:164] Run: docker volume create addons-139298 --label name.minikube.sigs.k8s.io=addons-139298 --label created_by.minikube.sigs.k8s.io=true
I1009 18:39:43.094360 142849 oci.go:103] Successfully created a docker volume addons-139298
I1009 18:39:43.094473 142849 cli_runner.go:164] Run: docker run --rm --name addons-139298-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-139298 --entrypoint /usr/bin/test -v addons-139298:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
I1009 18:39:45.353646 142849 cli_runner.go:217] Completed: docker run --rm --name addons-139298-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-139298 --entrypoint /usr/bin/test -v addons-139298:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib: (2.259123555s)
I1009 18:39:45.353706 142849 oci.go:107] Successfully prepared a docker volume addons-139298
I1009 18:39:45.353745 142849 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1009 18:39:45.353775 142849 kic.go:194] Starting extracting preloaded images to volume ...
I1009 18:39:45.353837 142849 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-139298:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
I1009 18:39:49.753281 142849 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-137890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-139298:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.39939666s)
I1009 18:39:49.753312 142849 kic.go:203] duration metric: took 4.39953526s to extract preloaded images to volume ...
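The extraction above unpacks the preloaded CRI-O image store into the addons-139298 volume, which the node container later mounts at /var. A rough spot-check of the result, assuming a throwaway busybox container is acceptable and that the store lands under the usual containers/storage path:
# List the extracted image store inside the volume
docker run --rm -v addons-139298:/var busybox ls /var/lib/containers/storage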
W1009 18:39:49.753422 142849 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W1009 18:39:49.753464 142849 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
I1009 18:39:49.753514 142849 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1009 18:39:49.812917 142849 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-139298 --name addons-139298 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-139298 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-139298 --network addons-139298 --ip 192.168.49.2 --volume addons-139298:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
I1009 18:39:50.105989 142849 cli_runner.go:164] Run: docker container inspect addons-139298 --format={{.State.Running}}
I1009 18:39:50.125263 142849 cli_runner.go:164] Run: docker container inspect addons-139298 --format={{.State.Status}}
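The docker run above publishes ports 22, 2376, 5000, 8443 and 32443 on loopback-only ephemeral host ports; the SSH port minikube dials next (32768 in this run) comes from these mappings. They can be listed with the stock docker CLI:
# Show host-port mappings for the node container's published ports
docker port addons-139298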
I1009 18:39:50.144553 142849 cli_runner.go:164] Run: docker exec addons-139298 stat /var/lib/dpkg/alternatives/iptables
I1009 18:39:50.194428 142849 oci.go:144] the created container "addons-139298" has a running status.
I1009 18:39:50.194461 142849 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/addons-139298/id_rsa...
I1009 18:39:50.429782 142849 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-137890/.minikube/machines/addons-139298/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1009 18:39:50.464319 142849 cli_runner.go:164] Run: docker container inspect addons-139298 --format={{.State.Status}}
I1009 18:39:50.483524 142849 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1009 18:39:50.483550 142849 kic_runner.go:114] Args: [docker exec --privileged addons-139298 chown docker:docker /home/docker/.ssh/authorized_keys]
I1009 18:39:50.531004 142849 cli_runner.go:164] Run: docker container inspect addons-139298 --format={{.State.Status}}
I1009 18:39:50.550668 142849 machine.go:93] provisionDockerMachine start ...
I1009 18:39:50.550784 142849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-139298
I1009 18:39:50.570460 142849 main.go:141] libmachine: Using SSH client type: native
I1009 18:39:50.570685 142849 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I1009 18:39:50.570697 142849 main.go:141] libmachine: About to run SSH command:
hostname
I1009 18:39:50.721476 142849 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-139298
I1009 18:39:50.721537 142849 ubuntu.go:182] provisioning hostname "addons-139298"
I1009 18:39:50.721611 142849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-139298
I1009 18:39:50.740695 142849 main.go:141] libmachine: Using SSH client type: native
I1009 18:39:50.740914 142849 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I1009 18:39:50.740928 142849 main.go:141] libmachine: About to run SSH command:
sudo hostname addons-139298 && echo "addons-139298" | sudo tee /etc/hostname
I1009 18:39:50.899338 142849 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-139298
I1009 18:39:50.899447 142849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-139298
I1009 18:39:50.917112 142849 main.go:141] libmachine: Using SSH client type: native
I1009 18:39:50.917334 142849 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I1009 18:39:50.917351 142849 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-139298' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-139298/g' /etc/hosts;
else
echo '127.0.1.1 addons-139298' | sudo tee -a /etc/hosts;
fi
fi
I1009 18:39:51.064444 142849 main.go:141] libmachine: SSH cmd err, output: <nil>:
I1009 18:39:51.064475 142849 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-137890/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-137890/.minikube}
I1009 18:39:51.064517 142849 ubuntu.go:190] setting up certificates
I1009 18:39:51.064536 142849 provision.go:84] configureAuth start
I1009 18:39:51.064594 142849 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-139298
I1009 18:39:51.082326 142849 provision.go:143] copyHostCerts
I1009 18:39:51.082417 142849 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/ca.pem (1078 bytes)
I1009 18:39:51.082533 142849 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/cert.pem (1123 bytes)
I1009 18:39:51.082592 142849 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-137890/.minikube/key.pem (1675 bytes)
I1009 18:39:51.082644 142849 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem org=jenkins.addons-139298 san=[127.0.0.1 192.168.49.2 addons-139298 localhost minikube]
I1009 18:39:51.345908 142849 provision.go:177] copyRemoteCerts
I1009 18:39:51.345969 142849 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1009 18:39:51.346017 142849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-139298
I1009 18:39:51.364326 142849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/addons-139298/id_rsa Username:docker}
I1009 18:39:51.469087 142849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1009 18:39:51.488563 142849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I1009 18:39:51.506122 142849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
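After configureAuth, the server certificate copied above should chain to the CA generated at 18:39:42. A hedged host-side check with openssl (paths from the log; openssl assumed installed):
# Verify the machine's server cert against the minikube-generated CA
openssl verify \
  -CAfile /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem \
  /home/jenkins/minikube-integration/21683-137890/.minikube/machines/server.pem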
I1009 18:39:51.523714 142849 provision.go:87] duration metric: took 459.158853ms to configureAuth
I1009 18:39:51.523752 142849 ubuntu.go:206] setting minikube options for container-runtime
I1009 18:39:51.523932 142849 config.go:182] Loaded profile config "addons-139298": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 18:39:51.524032 142849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-139298
I1009 18:39:51.542486 142849 main.go:141] libmachine: Using SSH client type: native
I1009 18:39:51.542707 142849 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I1009 18:39:51.542725 142849 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I1009 18:39:51.804333 142849 main.go:141] libmachine: SSH cmd err, output: <nil>:
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
I1009 18:39:51.804361 142849 machine.go:96] duration metric: took 1.253670246s to provisionDockerMachine
I1009 18:39:51.804371 142849 client.go:171] duration metric: took 9.148295347s to LocalClient.Create
I1009 18:39:51.804409 142849 start.go:168] duration metric: took 9.148374388s to libmachine.API.Create "addons-139298"
I1009 18:39:51.804420 142849 start.go:294] postStartSetup for "addons-139298" (driver="docker")
I1009 18:39:51.804433 142849 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1009 18:39:51.804487 142849 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1009 18:39:51.804537 142849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-139298
I1009 18:39:51.823166 142849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/addons-139298/id_rsa Username:docker}
I1009 18:39:51.928444 142849 ssh_runner.go:195] Run: cat /etc/os-release
I1009 18:39:51.932029 142849 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1009 18:39:51.932058 142849 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1009 18:39:51.932073 142849 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/addons for local assets ...
I1009 18:39:51.932140 142849 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-137890/.minikube/files for local assets ...
I1009 18:39:51.932175 142849 start.go:297] duration metric: took 127.747641ms for postStartSetup
I1009 18:39:51.932508 142849 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-139298
I1009 18:39:51.950046 142849 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/config.json ...
I1009 18:39:51.950310 142849 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1009 18:39:51.950351 142849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-139298
I1009 18:39:51.969058 142849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/addons-139298/id_rsa Username:docker}
I1009 18:39:52.069900 142849 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1009 18:39:52.074436 142849 start.go:129] duration metric: took 9.499262716s to createHost
I1009 18:39:52.074462 142849 start.go:84] releasing machines lock for "addons-139298", held for 9.499405215s
I1009 18:39:52.074536 142849 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-139298
I1009 18:39:52.091847 142849 ssh_runner.go:195] Run: cat /version.json
I1009 18:39:52.091879 142849 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1009 18:39:52.091894 142849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-139298
I1009 18:39:52.091945 142849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-139298
I1009 18:39:52.110072 142849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/addons-139298/id_rsa Username:docker}
I1009 18:39:52.110738 142849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-137890/.minikube/machines/addons-139298/id_rsa Username:docker}
I1009 18:39:52.263605 142849 ssh_runner.go:195] Run: systemctl --version
I1009 18:39:52.269960 142849 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
I1009 18:39:52.305416 142849 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1009 18:39:52.310236 142849 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1009 18:39:52.310297 142849 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1009 18:39:52.337865 142849 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
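Because the docker driver plus CRI-O pairs with kindnet (cni.go:143 above), minikube renames the bundled bridge/podman CNI configs out of the way. What remains in the directory can be checked from inside the node, e.g.:
# Inspect the CNI config directory after the bridge configs are disabled
docker exec addons-139298 ls /etc/cni/net.d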
I1009 18:39:52.337887 142849 start.go:496] detecting cgroup driver to use...
I1009 18:39:52.337920 142849 detect.go:190] detected "systemd" cgroup driver on host os
I1009 18:39:52.337969 142849 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1009 18:39:52.354977 142849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1009 18:39:52.368019 142849 docker.go:218] disabling cri-docker service (if available) ...
I1009 18:39:52.368085 142849 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1009 18:39:52.385214 142849 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1009 18:39:52.403678 142849 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1009 18:39:52.486629 142849 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1009 18:39:52.575347 142849 docker.go:234] disabling docker service ...
I1009 18:39:52.575464 142849 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1009 18:39:52.595394 142849 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1009 18:39:52.608287 142849 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1009 18:39:52.696154 142849 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1009 18:39:52.778545 142849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1009 18:39:52.791534 142849 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I1009 18:39:52.806254 142849 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
I1009 18:39:52.806322 142849 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
I1009 18:39:52.817298 142849 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
I1009 18:39:52.817366 142849 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
I1009 18:39:52.826395 142849 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
I1009 18:39:52.835443 142849 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
I1009 18:39:52.844309 142849 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1009 18:39:52.852771 142849 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
I1009 18:39:52.861654 142849 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
I1009 18:39:52.875723 142849 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
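Taken together, the sed edits above should leave the CRI-O drop-in with roughly the keys sketched below; this is a reconstruction from the commands, not a dump of the real file:
# Review the drop-in the sed pipeline just rewrote
docker exec addons-139298 cat /etc/crio/crio.conf.d/02-crio.conf
# expected, roughly (other keys omitted):
#   pause_image = "registry.k8s.io/pause:3.10.1"
#   cgroup_manager = "systemd"
#   conmon_cgroup = "pod"
#   default_sysctls = [
#     "net.ipv4.ip_unprivileged_port_start=0",
#   ]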
I1009 18:39:52.885075 142849 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1009 18:39:52.892588 142849 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I1009 18:39:52.892651 142849 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I1009 18:39:52.905905 142849 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1009 18:39:52.913851 142849 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1009 18:39:52.993186 142849 ssh_runner.go:195] Run: sudo systemctl restart crio
I1009 18:39:53.100800 142849 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
I1009 18:39:53.100891 142849 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
I1009 18:39:53.105077 142849 start.go:564] Will wait 60s for crictl version
I1009 18:39:53.105137 142849 ssh_runner.go:195] Run: which crictl
I1009 18:39:53.108706 142849 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1009 18:39:53.134103 142849 start.go:580] Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.34.1
RuntimeApiVersion: v1
I1009 18:39:53.134231 142849 ssh_runner.go:195] Run: crio --version
I1009 18:39:53.163457 142849 ssh_runner.go:195] Run: crio --version
I1009 18:39:53.194280 142849 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
I1009 18:39:53.195522 142849 cli_runner.go:164] Run: docker network inspect addons-139298 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1009 18:39:53.212147 142849 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I1009 18:39:53.216312 142849 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1009 18:39:53.226475 142849 kubeadm.go:883] updating cluster {Name:addons-139298 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-139298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1009 18:39:53.226607 142849 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1009 18:39:53.226650 142849 ssh_runner.go:195] Run: sudo crictl images --output json
I1009 18:39:53.259127 142849 crio.go:514] all images are preloaded for cri-o runtime.
I1009 18:39:53.259157 142849 crio.go:433] Images already preloaded, skipping extraction
I1009 18:39:53.259220 142849 ssh_runner.go:195] Run: sudo crictl images --output json
I1009 18:39:53.285739 142849 crio.go:514] all images are preloaded for cri-o runtime.
I1009 18:39:53.285763 142849 cache_images.go:85] Images are preloaded, skipping loading
I1009 18:39:53.285773 142849 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
I1009 18:39:53.285856 142849 kubeadm.go:946] kubelet [Unit]
Wants=crio.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-139298 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.34.1 ClusterName:addons-139298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
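The ExecStart override printed above lands in a systemd drop-in a few lines later (the 10-kubeadm.conf scp at 18:39:53.348481). The merged unit can be reviewed on the node with systemctl:
# Show the kubelet unit together with minikube's drop-in
docker exec addons-139298 systemctl cat kubelet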
I1009 18:39:53.285916 142849 ssh_runner.go:195] Run: crio config
I1009 18:39:53.330789 142849 cni.go:84] Creating CNI manager for ""
I1009 18:39:53.330808 142849 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
I1009 18:39:53.330828 142849 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1009 18:39:53.330855 142849 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-139298 NodeName:addons-139298 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1009 18:39:53.331019 142849 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.49.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/crio/crio.sock
name: "addons-139298"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.49.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
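This generated document is written to /var/tmp/minikube/kubeadm.yaml.new just below. A hedged way to vet it before bootstrap is kubeadm's dry-run mode, reusing the cached binary from the same log (the real start runs its own kubeadm invocation; this is only a validation sketch):
# Dry-run kubeadm init against the generated config without touching the node
docker exec addons-139298 sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run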
I1009 18:39:53.331093 142849 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
I1009 18:39:53.339832 142849 binaries.go:44] Found k8s binaries, skipping transfer
I1009 18:39:53.339892 142849 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1009 18:39:53.348481 142849 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
I1009 18:39:53.361395 142849 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1009 18:39:53.377002 142849 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
I1009 18:39:53.390030 142849 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I1009 18:39:53.393824 142849 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1009 18:39:53.404225 142849 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1009 18:39:53.487307 142849 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1009 18:39:53.507940 142849 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298 for IP: 192.168.49.2
I1009 18:39:53.507969 142849 certs.go:195] generating shared ca certs ...
I1009 18:39:53.508006 142849 certs.go:227] acquiring lock for ca certs: {Name:mkb62c96cf33dc4f9ff25fea834424a1e223b24a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1009 18:39:53.508941 142849 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key
I1009 18:39:53.638790 142849 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt ...
I1009 18:39:53.638824 142849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt: {Name:mk926486f9e0523ea70fea9163d972006ea77f6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1009 18:39:53.639707 142849 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key ...
I1009 18:39:53.639733 142849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key: {Name:mkbb7fdee2e3223ce98cc7eb1427bb63146a4001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1009 18:39:53.639861 142849 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key
I1009 18:39:53.931315 142849 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt ...
I1009 18:39:53.931351 142849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt: {Name:mk8b123b71e93c7266be83f7db2711ce2438ac01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1009 18:39:53.932540 142849 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key ...
I1009 18:39:53.932566 142849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key: {Name:mkc9fa7331fb59618160a9960ed0d3f8d4cab034 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1009 18:39:53.932717 142849 certs.go:257] generating profile certs ...
I1009 18:39:53.932792 142849 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/client.key
I1009 18:39:53.932808 142849 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/client.crt with IP's: []
I1009 18:39:54.151720 142849 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/client.crt ...
I1009 18:39:54.151757 142849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/client.crt: {Name:mk90d6b29686c2412fb39404cfcdbd54eafe5bb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1009 18:39:54.151969 142849 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/client.key ...
I1009 18:39:54.151986 142849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/client.key: {Name:mk9a76227a473e66c172f016a0ba484179fde245 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1009 18:39:54.152100 142849 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/apiserver.key.4cc4f899
I1009 18:39:54.152124 142849 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/apiserver.crt.4cc4f899 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
I1009 18:39:54.515706 142849 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/apiserver.crt.4cc4f899 ...
I1009 18:39:54.515740 142849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/apiserver.crt.4cc4f899: {Name:mkee55d6c031b17a28894fabb0580ae72888c333 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1009 18:39:54.515941 142849 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/apiserver.key.4cc4f899 ...
I1009 18:39:54.515957 142849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/apiserver.key.4cc4f899: {Name:mk920799f3ab74f80e2e3e1063eddc59a20dc5e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1009 18:39:54.516037 142849 certs.go:382] copying /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/apiserver.crt.4cc4f899 -> /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/apiserver.crt
I1009 18:39:54.516131 142849 certs.go:386] copying /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/apiserver.key.4cc4f899 -> /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/apiserver.key
I1009 18:39:54.516180 142849 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/proxy-client.key
I1009 18:39:54.516198 142849 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/proxy-client.crt with IP's: []
I1009 18:39:54.632996 142849 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/proxy-client.crt ...
I1009 18:39:54.633030 142849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/proxy-client.crt: {Name:mk9fcf902e178624332654eb2c089642aaaec6e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1009 18:39:54.633787 142849 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/proxy-client.key ...
I1009 18:39:54.633807 142849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/proxy-client.key: {Name:mk2d7d42f26e04dd7dfaf8057702acf3314ab3f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1009 18:39:54.634503 142849 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca-key.pem (1675 bytes)
I1009 18:39:54.634543 142849 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/ca.pem (1078 bytes)
I1009 18:39:54.634564 142849 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/cert.pem (1123 bytes)
I1009 18:39:54.634586 142849 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-137890/.minikube/certs/key.pem (1675 bytes)
I1009 18:39:54.635313 142849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1009 18:39:54.654055 142849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1009 18:39:54.671373 142849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1009 18:39:54.688697 142849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1009 18:39:54.706111 142849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I1009 18:39:54.722885 142849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1009 18:39:54.740033 142849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1009 18:39:54.757144 142849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/profiles/addons-139298/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I1009 18:39:54.774060 142849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-137890/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1009 18:39:54.793234 142849 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1009 18:39:54.806165 142849 ssh_runner.go:195] Run: openssl version
I1009 18:39:54.812416 142849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1009 18:39:54.824578 142849 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1009 18:39:54.828811 142849 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 9 18:39 /usr/share/ca-certificates/minikubeCA.pem
I1009 18:39:54.828878 142849 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1009 18:39:54.863742 142849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
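(The /etc/ssl/certs/b5213941.0 name created above follows the OpenSSL subject-hash convention: the file name is the hash printed by `openssl x509 -hash`, which is how OpenSSL looks up CAs in /etc/ssl/certs. A sketch to verify the link from the host; the node container name addons-139298 and the use of docker exec are assumptions based on this run's docker driver:
  # print the subject hash that determines the symlink name
  docker exec addons-139298 openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
  # confirm the hash symlink resolves back to the minikube CA
  docker exec addons-139298 readlink -f /etc/ssl/certs/b5213941.0
)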
I1009 18:39:54.873297 142849 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1009 18:39:54.877286 142849 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1009 18:39:54.877352 142849 kubeadm.go:400] StartCluster: {Name:addons-139298 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-139298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1009 18:39:54.877449 142849 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I1009 18:39:54.877523 142849 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1009 18:39:54.906831 142849 cri.go:89] found id: ""
I1009 18:39:54.906895 142849 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1009 18:39:54.915263 142849 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1009 18:39:54.923559 142849 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I1009 18:39:54.923626 142849 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1009 18:39:54.931404 142849 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1009 18:39:54.931427 142849 kubeadm.go:157] found existing configuration files:
I1009 18:39:54.931467 142849 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1009 18:39:54.938962 142849 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1009 18:39:54.939028 142849 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1009 18:39:54.946419 142849 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1009 18:39:54.954219 142849 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1009 18:39:54.954267 142849 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1009 18:39:54.961617 142849 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1009 18:39:54.968844 142849 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1009 18:39:54.968900 142849 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1009 18:39:54.975926 142849 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1009 18:39:54.983279 142849 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1009 18:39:54.983343 142849 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1009 18:39:54.990468 142849 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1009 18:39:55.062184 142849 kubeadm.go:318] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
I1009 18:39:55.121091 142849 kubeadm.go:318] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1009 18:44:00.178199 142849 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
I1009 18:44:00.178368 142849 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
I1009 18:44:00.181840 142849 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
I1009 18:44:00.181930 142849 kubeadm.go:318] [preflight] Running pre-flight checks
I1009 18:44:00.182064 142849 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
I1009 18:44:00.182134 142849 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
I1009 18:44:00.182184 142849 kubeadm.go:318] OS: Linux
I1009 18:44:00.182253 142849 kubeadm.go:318] CGROUPS_CPU: enabled
I1009 18:44:00.182316 142849 kubeadm.go:318] CGROUPS_CPUSET: enabled
I1009 18:44:00.182408 142849 kubeadm.go:318] CGROUPS_DEVICES: enabled
I1009 18:44:00.182472 142849 kubeadm.go:318] CGROUPS_FREEZER: enabled
I1009 18:44:00.182513 142849 kubeadm.go:318] CGROUPS_MEMORY: enabled
I1009 18:44:00.182565 142849 kubeadm.go:318] CGROUPS_PIDS: enabled
I1009 18:44:00.182606 142849 kubeadm.go:318] CGROUPS_HUGETLB: enabled
I1009 18:44:00.182647 142849 kubeadm.go:318] CGROUPS_IO: enabled
I1009 18:44:00.182724 142849 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
I1009 18:44:00.182855 142849 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1009 18:44:00.182955 142849 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1009 18:44:00.183016 142849 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1009 18:44:00.185577 142849 out.go:252] - Generating certificates and keys ...
I1009 18:44:00.185654 142849 kubeadm.go:318] [certs] Using existing ca certificate authority
I1009 18:44:00.185727 142849 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
I1009 18:44:00.185786 142849 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
I1009 18:44:00.185838 142849 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
I1009 18:44:00.185892 142849 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
I1009 18:44:00.185937 142849 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
I1009 18:44:00.185983 142849 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
I1009 18:44:00.186154 142849 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-139298 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1009 18:44:00.186228 142849 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
I1009 18:44:00.186395 142849 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-139298 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1009 18:44:00.186503 142849 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
I1009 18:44:00.186569 142849 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
I1009 18:44:00.186612 142849 kubeadm.go:318] [certs] Generating "sa" key and public key
I1009 18:44:00.186663 142849 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1009 18:44:00.186713 142849 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
I1009 18:44:00.186766 142849 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1009 18:44:00.186818 142849 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1009 18:44:00.186889 142849 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1009 18:44:00.186941 142849 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1009 18:44:00.187015 142849 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1009 18:44:00.187083 142849 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1009 18:44:00.188574 142849 out.go:252] - Booting up control plane ...
I1009 18:44:00.188650 142849 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1009 18:44:00.188720 142849 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1009 18:44:00.188783 142849 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1009 18:44:00.188885 142849 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1009 18:44:00.188964 142849 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1009 18:44:00.189056 142849 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1009 18:44:00.189146 142849 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1009 18:44:00.189190 142849 kubeadm.go:318] [kubelet-start] Starting the kubelet
I1009 18:44:00.189299 142849 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1009 18:44:00.189490 142849 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1009 18:44:00.189596 142849 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001006049s
I1009 18:44:00.189741 142849 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1009 18:44:00.189830 142849 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
I1009 18:44:00.189954 142849 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1009 18:44:00.190024 142849 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1009 18:44:00.190085 142849 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000331782s
I1009 18:44:00.190152 142849 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000711447s
I1009 18:44:00.190226 142849 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000653521s
I1009 18:44:00.190232 142849 kubeadm.go:318]
I1009 18:44:00.190308 142849 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
I1009 18:44:00.190404 142849 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
I1009 18:44:00.190483 142849 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
I1009 18:44:00.190579 142849 kubeadm.go:318] - 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
I1009 18:44:00.190712 142849 kubeadm.go:318] Once you have found the failing container, you can inspect its logs with:
I1009 18:44:00.190792 142849 kubeadm.go:318] - 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
I1009 18:44:00.190851 142849 kubeadm.go:318]
W1009 18:44:00.191013 142849 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 6.8.0-1041-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_IO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [addons-139298 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [addons-139298 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.001006049s
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-apiserver is not healthy after 4m0.000331782s
[control-plane-check] kube-scheduler is not healthy after 4m0.000711447s
[control-plane-check] kube-controller-manager is not healthy after 4m0.000653521s
A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
To see the stack trace of this error execute with --v=5 or higher
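(The crictl commands suggested in the output above have to run inside the node. From the host, a hedged sketch wrapping them in docker exec; the container name addons-139298 comes from this run and CONTAINERID is a placeholder to fill in:
  # list kube containers inside the minikube node, excluding pause sandboxes
  docker exec addons-139298 crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
  # then inspect the logs of whichever container is failing
  docker exec addons-139298 crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
)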
I1009 18:44:00.191111 142849 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
I1009 18:44:00.639084 142849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1009 18:44:00.651955 142849 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I1009 18:44:00.652011 142849 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1009 18:44:00.660326 142849 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1009 18:44:00.660345 142849 kubeadm.go:157] found existing configuration files:
I1009 18:44:00.660406 142849 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1009 18:44:00.668277 142849 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1009 18:44:00.668397 142849 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1009 18:44:00.676109 142849 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1009 18:44:00.684114 142849 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1009 18:44:00.684181 142849 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1009 18:44:00.692197 142849 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1009 18:44:00.700419 142849 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1009 18:44:00.700503 142849 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1009 18:44:00.708357 142849 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1009 18:44:00.716362 142849 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1009 18:44:00.716452 142849 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1009 18:44:00.724570 142849 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1009 18:44:00.761995 142849 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
I1009 18:44:00.762077 142849 kubeadm.go:318] [preflight] Running pre-flight checks
I1009 18:44:00.782893 142849 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
I1009 18:44:00.782982 142849 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
I1009 18:44:00.783039 142849 kubeadm.go:318] OS: Linux
I1009 18:44:00.783107 142849 kubeadm.go:318] CGROUPS_CPU: enabled
I1009 18:44:00.783169 142849 kubeadm.go:318] CGROUPS_CPUSET: enabled
I1009 18:44:00.783224 142849 kubeadm.go:318] CGROUPS_DEVICES: enabled
I1009 18:44:00.783299 142849 kubeadm.go:318] CGROUPS_FREEZER: enabled
I1009 18:44:00.783346 142849 kubeadm.go:318] CGROUPS_MEMORY: enabled
I1009 18:44:00.783416 142849 kubeadm.go:318] CGROUPS_PIDS: enabled
I1009 18:44:00.783457 142849 kubeadm.go:318] CGROUPS_HUGETLB: enabled
I1009 18:44:00.783499 142849 kubeadm.go:318] CGROUPS_IO: enabled
I1009 18:44:00.843821 142849 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
I1009 18:44:00.843995 142849 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1009 18:44:00.844145 142849 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1009 18:44:00.851166 142849 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1009 18:44:00.854256 142849 out.go:252] - Generating certificates and keys ...
I1009 18:44:00.854355 142849 kubeadm.go:318] [certs] Using existing ca certificate authority
I1009 18:44:00.854455 142849 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
I1009 18:44:00.854580 142849 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1009 18:44:00.854675 142849 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
I1009 18:44:00.854766 142849 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
I1009 18:44:00.854847 142849 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
I1009 18:44:00.854943 142849 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
I1009 18:44:00.855044 142849 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
I1009 18:44:00.855164 142849 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1009 18:44:00.855285 142849 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1009 18:44:00.855349 142849 kubeadm.go:318] [certs] Using the existing "sa" key
I1009 18:44:00.855449 142849 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1009 18:44:01.055104 142849 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
I1009 18:44:01.286049 142849 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1009 18:44:01.840411 142849 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1009 18:44:01.938562 142849 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1009 18:44:02.214511 142849 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1009 18:44:02.215019 142849 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1009 18:44:02.217245 142849 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1009 18:44:02.220606 142849 out.go:252] - Booting up control plane ...
I1009 18:44:02.220730 142849 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1009 18:44:02.220814 142849 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1009 18:44:02.220876 142849 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1009 18:44:02.234340 142849 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1009 18:44:02.234550 142849 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1009 18:44:02.241242 142849 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1009 18:44:02.241467 142849 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1009 18:44:02.241561 142849 kubeadm.go:318] [kubelet-start] Starting the kubelet
I1009 18:44:02.348245 142849 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1009 18:44:02.348415 142849 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1009 18:44:03.349141 142849 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001031751s
I1009 18:44:03.352060 142849 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1009 18:44:03.352187 142849 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
I1009 18:44:03.352320 142849 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1009 18:44:03.352438 142849 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1009 18:48:03.353372 142849 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000972123s
I1009 18:48:03.353526 142849 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001015431s
I1009 18:48:03.353637 142849 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001317584s
I1009 18:48:03.353649 142849 kubeadm.go:318]
I1009 18:48:03.353761 142849 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
I1009 18:48:03.353886 142849 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
I1009 18:48:03.354039 142849 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
I1009 18:48:03.354175 142849 kubeadm.go:318] - 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
I1009 18:48:03.354298 142849 kubeadm.go:318] Once you have found the failing container, you can inspect its logs with:
I1009 18:48:03.354475 142849 kubeadm.go:318] - 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
I1009 18:48:03.354486 142849 kubeadm.go:318]
I1009 18:48:03.358291 142849 kubeadm.go:318] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
I1009 18:48:03.358473 142849 kubeadm.go:318] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1009 18:48:03.359297 142849 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
I1009 18:48:03.359418 142849 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
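(The three URLs in the error above are the health endpoints kubeadm polls, so they can be probed directly to see which component never came up. A sketch, assuming curl is present in the node image; -k is needed because the serving certs are signed by the cluster CA:
  docker exec addons-139298 curl -ks https://192.168.49.2:8443/livez     # kube-apiserver
  docker exec addons-139298 curl -ks https://127.0.0.1:10259/livez      # kube-scheduler
  docker exec addons-139298 curl -ks https://127.0.0.1:10257/healthz    # kube-controller-manager
)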
I1009 18:48:03.359556 142849 kubeadm.go:402] duration metric: took 8m8.482207871s to StartCluster
I1009 18:48:03.359811 142849 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
I1009 18:48:03.359985 142849 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1009 18:48:03.388677 142849 cri.go:89] found id: ""
I1009 18:48:03.388721 142849 logs.go:282] 0 containers: []
W1009 18:48:03.388734 142849 logs.go:284] No container was found matching "kube-apiserver"
I1009 18:48:03.388742 142849 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
I1009 18:48:03.388946 142849 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1009 18:48:03.415393 142849 cri.go:89] found id: ""
I1009 18:48:03.415428 142849 logs.go:282] 0 containers: []
W1009 18:48:03.415440 142849 logs.go:284] No container was found matching "etcd"
I1009 18:48:03.415446 142849 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
I1009 18:48:03.415495 142849 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1009 18:48:03.442580 142849 cri.go:89] found id: ""
I1009 18:48:03.442605 142849 logs.go:282] 0 containers: []
W1009 18:48:03.442613 142849 logs.go:284] No container was found matching "coredns"
I1009 18:48:03.442620 142849 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
I1009 18:48:03.442670 142849 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1009 18:48:03.470120 142849 cri.go:89] found id: ""
I1009 18:48:03.470148 142849 logs.go:282] 0 containers: []
W1009 18:48:03.470157 142849 logs.go:284] No container was found matching "kube-scheduler"
I1009 18:48:03.470164 142849 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
I1009 18:48:03.470212 142849 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1009 18:48:03.498917 142849 cri.go:89] found id: ""
I1009 18:48:03.498947 142849 logs.go:282] 0 containers: []
W1009 18:48:03.498958 142849 logs.go:284] No container was found matching "kube-proxy"
I1009 18:48:03.498966 142849 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
I1009 18:48:03.499026 142849 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1009 18:48:03.526724 142849 cri.go:89] found id: ""
I1009 18:48:03.526757 142849 logs.go:282] 0 containers: []
W1009 18:48:03.526767 142849 logs.go:284] No container was found matching "kube-controller-manager"
I1009 18:48:03.526776 142849 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
I1009 18:48:03.526842 142849 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1009 18:48:03.554780 142849 cri.go:89] found id: ""
I1009 18:48:03.554814 142849 logs.go:282] 0 containers: []
W1009 18:48:03.554825 142849 logs.go:284] No container was found matching "kindnet"
I1009 18:48:03.554840 142849 logs.go:123] Gathering logs for kubelet ...
I1009 18:48:03.554860 142849 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1009 18:48:03.623582 142849 logs.go:123] Gathering logs for dmesg ...
I1009 18:48:03.623621 142849 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1009 18:48:03.636753 142849 logs.go:123] Gathering logs for describe nodes ...
I1009 18:48:03.636783 142849 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1009 18:48:03.702919 142849 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1009 18:48:03.693280 2397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1009 18:48:03.695178 2397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1009 18:48:03.695797 2397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1009 18:48:03.697455 2397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1009 18:48:03.697952 2397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
E1009 18:48:03.693280 2397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1009 18:48:03.695178 2397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1009 18:48:03.695797 2397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1009 18:48:03.697455 2397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1009 18:48:03.697952 2397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
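(Every kubectl attempt above gets connection refused on 8443, i.e. nothing is listening where the apiserver should be, which is consistent with the empty crictl listings earlier in the log. A quick check from the host, assuming ss from iproute2 is available in the node image:
  # no output means no process is bound to the apiserver port
  docker exec addons-139298 sh -c "ss -ltn | grep 8443"
)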
I1009 18:48:03.702953 142849 logs.go:123] Gathering logs for CRI-O ...
I1009 18:48:03.702983 142849 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
I1009 18:48:03.766952 142849 logs.go:123] Gathering logs for container status ...
I1009 18:48:03.766996 142849 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W1009 18:48:03.798751 142849 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 6.8.0-1041-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_IO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.001031751s
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-scheduler is not healthy after 4m0.000972123s
[control-plane-check] kube-apiserver is not healthy after 4m0.001015431s
[control-plane-check] kube-controller-manager is not healthy after 4m0.001317584s
A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
To see the stack trace of this error execute with --v=5 or higher
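The crictl commands suggested above have to run inside the minikube node rather than on the CI host, since /var/run/crio/crio.sock only exists in the node. With the docker driver the node is a container named after the profile, so a minimal sketch for this run (assuming the profile name addons-139298 and the binary under test) is:

    # enter the node container and list every kube container, including exited ones
    $ docker exec -it addons-139298 crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # or use minikube's own ssh wrapper to do the same
    $ out/minikube-linux-amd64 ssh -p addons-139298 -- sudo crictl ps -a
    # then pull the failing container's logs (CONTAINERID taken from the listing above)
    $ docker exec addons-139298 crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID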
W1009 18:48:03.798836 142849 out.go:285] *
W1009 18:48:03.798924 142849 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 6.8.0-1041-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_IO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.001031751s
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-scheduler is not healthy after 4m0.000972123s
[control-plane-check] kube-apiserver is not healthy after 4m0.001015431s
[control-plane-check] kube-controller-manager is not healthy after 4m0.001317584s
A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtime's CLI.
Here is one example of how you can list all running Kubernetes containers using crictl:
- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
To see the stack trace of this error execute with --v=5 or higher
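Each endpoint the wait-control-plane phase polls can also be probed by hand; "connection refused" from inside the node confirms the static pod never bound its port. A sketch under the same assumptions (profile addons-139298, curl available in the node image):

    $ docker exec addons-139298 curl -sk https://192.168.49.2:8443/livez     # kube-apiserver
    $ docker exec addons-139298 curl -sk https://127.0.0.1:10257/healthz     # kube-controller-manager
    $ docker exec addons-139298 curl -sk https://127.0.0.1:10259/livez       # kube-scheduler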
W1009 18:48:03.798944 142849 out.go:285] *
W1009 18:48:03.800831 142849 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
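For the log bundle the box asks for, the profile flag matters on a multi-profile CI host; a sketch using the binary under test:

    $ out/minikube-linux-amd64 logs -p addons-139298 --file=logs.txt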
I1009 18:48:03.804413 142849 out.go:203]
W1009 18:48:03.805486 142849 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 6.8.0-1041-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_IO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.001031751s
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-scheduler is not healthy after 4m0.000972123s
[control-plane-check] kube-apiserver is not healthy after 4m0.001015431s
[control-plane-check] kube-controller-manager is not healthy after 4m0.001317584s
A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtime's CLI.
Here is one example of how you can list all running Kubernetes containers using crictl:
- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
To see the stack trace of this error execute with --v=5 or higher
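Of the two stderr warnings, the SystemVerification one is typically benign on GCP kernels (no "configs" module is shipped, so kubeadm cannot read the kernel config), while the Service-Kubelet one carries its own fix. A sketch for applying it inside the node, under the same profile assumption:

    $ out/minikube-linux-amd64 ssh -p addons-139298 -- sudo systemctl enable kubelet.service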
W1009 18:48:03.805512 142849 out.go:285] *
I1009 18:48:03.807236 142849 out.go:203]
** /stderr **
addons_test.go:110: out/minikube-linux-amd64 start -p addons-139298 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher failed: exit status 80
--- FAIL: TestAddons/Setup (514.48s)