=== RUN TestAddons/Setup
addons_test.go:108: (dbg) Run: out/minikube-linux-amd64 start -p addons-995790 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p addons-995790 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: exit status 80 (8m40.970079354s)
-- stdout --
* [addons-995790] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
- MINIKUBE_LOCATION=21409
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "addons-995790" primary control-plane node in "addons-995790" cluster
* Pulling base image v0.0.48-1759745255-21703 ...
* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
-- /stdout --
** stderr **
I1014 19:14:54.143098 418677 out.go:360] Setting OutFile to fd 1 ...
I1014 19:14:54.143398 418677 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1014 19:14:54.143409 418677 out.go:374] Setting ErrFile to fd 2...
I1014 19:14:54.143413 418677 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1014 19:14:54.143632 418677 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
I1014 19:14:54.144229 418677 out.go:368] Setting JSON to false
I1014 19:14:54.145235 418677 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":7040,"bootTime":1760462254,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1014 19:14:54.145345 418677 start.go:141] virtualization: kvm guest
I1014 19:14:54.147363 418677 out.go:179] * [addons-995790] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1014 19:14:54.149119 418677 out.go:179] - MINIKUBE_LOCATION=21409
I1014 19:14:54.149122 418677 notify.go:220] Checking for updates...
I1014 19:14:54.150463 418677 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1014 19:14:54.152135 418677 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
I1014 19:14:54.153561 418677 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
I1014 19:14:54.154959 418677 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1014 19:14:54.156505 418677 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1014 19:14:54.158035 418677 driver.go:421] Setting default libvirt URI to qemu:///system
I1014 19:14:54.183220 418677 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
I1014 19:14:54.183324 418677 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1014 19:14:54.245784 418677 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-14 19:14:54.234834129 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1014 19:14:54.245907 418677 docker.go:318] overlay module found
I1014 19:14:54.247538 418677 out.go:179] * Using the docker driver based on user configuration
I1014 19:14:54.248661 418677 start.go:305] selected driver: docker
I1014 19:14:54.248676 418677 start.go:925] validating driver "docker" against <nil>
I1014 19:14:54.248688 418677 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1014 19:14:54.249214 418677 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1014 19:14:54.311539 418677 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-14 19:14:54.301353849 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1014 19:14:54.311819 418677 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1014 19:14:54.312102 418677 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1014 19:14:54.314062 418677 out.go:179] * Using Docker driver with root privileges
I1014 19:14:54.315525 418677 cni.go:84] Creating CNI manager for ""
I1014 19:14:54.315606 418677 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
I1014 19:14:54.315621 418677 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
I1014 19:14:54.315715 418677 start.go:349] cluster config:
{Name:addons-995790 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-995790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1014 19:14:54.317185 418677 out.go:179] * Starting "addons-995790" primary control-plane node in "addons-995790" cluster
I1014 19:14:54.318636 418677 cache.go:123] Beginning downloading kic base image for docker with crio
I1014 19:14:54.320059 418677 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
I1014 19:14:54.321211 418677 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1014 19:14:54.321257 418677 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
I1014 19:14:54.321267 418677 cache.go:58] Caching tarball of preloaded images
I1014 19:14:54.321325 418677 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
I1014 19:14:54.321367 418677 preload.go:233] Found /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
I1014 19:14:54.321375 418677 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
I1014 19:14:54.321700 418677 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/config.json ...
I1014 19:14:54.321726 418677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/config.json: {Name:mk863fd1f62ebe29846bf9c83671c965452917a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1014 19:14:54.339156 418677 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 to local cache
I1014 19:14:54.339307 418677 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory
I1014 19:14:54.339328 418677 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory, skipping pull
I1014 19:14:54.339333 418677 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in cache, skipping pull
I1014 19:14:54.339340 418677 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 as a tarball
I1014 19:14:54.339348 418677 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 from local cache
I1014 19:15:07.142673 418677 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 from cached tarball
I1014 19:15:07.142721 418677 cache.go:232] Successfully downloaded all kic artifacts
I1014 19:15:07.142784 418677 start.go:360] acquireMachinesLock for addons-995790: {Name:mk266b39183b20e3ac85090b638bd67120f36dfe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1014 19:15:07.142932 418677 start.go:364] duration metric: took 115.304µs to acquireMachinesLock for "addons-995790"
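
The acquireMachinesLock spec logged above ({Delay:500ms Timeout:10m0s}) describes a retry-until-deadline lock around machine provisioning. A minimal, dependency-free Go sketch of that pattern; the lock-file path and the acquire helper are illustrative, not minikube's actual implementation:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquire retries an exclusive file create every `delay` until `timeout`,
// mirroring the Delay/Timeout fields shown in the lock spec above.
func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, errors.New("timed out acquiring " + path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquire("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
	fmt.Println("lock held; provisioning would proceed here")
}
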
I1014 19:15:07.142971 418677 start.go:93] Provisioning new machine with config: &{Name:addons-995790 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-995790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
I1014 19:15:07.143044 418677 start.go:125] createHost starting for "" (driver="docker")
I1014 19:15:07.145390 418677 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
I1014 19:15:07.145624 418677 start.go:159] libmachine.API.Create for "addons-995790" (driver="docker")
I1014 19:15:07.145656 418677 client.go:168] LocalClient.Create starting
I1014 19:15:07.145846 418677 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem
I1014 19:15:07.434905 418677 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem
I1014 19:15:07.715299 418677 cli_runner.go:164] Run: docker network inspect addons-995790 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1014 19:15:07.733452 418677 cli_runner.go:211] docker network inspect addons-995790 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1014 19:15:07.733522 418677 network_create.go:284] running [docker network inspect addons-995790] to gather additional debugging logs...
I1014 19:15:07.733543 418677 cli_runner.go:164] Run: docker network inspect addons-995790
W1014 19:15:07.750744 418677 cli_runner.go:211] docker network inspect addons-995790 returned with exit code 1
I1014 19:15:07.750793 418677 network_create.go:287] error running [docker network inspect addons-995790]: docker network inspect addons-995790: exit status 1
stdout:
[]
stderr:
Error response from daemon: network addons-995790 not found
I1014 19:15:07.750815 418677 network_create.go:289] output of [docker network inspect addons-995790]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network addons-995790 not found
** /stderr **
I1014 19:15:07.750926 418677 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1014 19:15:07.768616 418677 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00188ad60}
I1014 19:15:07.768676 418677 network_create.go:124] attempt to create docker network addons-995790 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1014 19:15:07.768727 418677 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-995790 addons-995790
I1014 19:15:07.958947 418677 network_create.go:108] docker network addons-995790 192.168.49.0/24 created
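
The --format argument passed to docker network inspect above is a Go text/template that renders selected network fields as a JSON object. A minimal sketch of the same technique run against the default bridge network; the format string is trimmed to the Name/Driver/Subnet/Gateway pieces of the logged template, and the decode struct is an assumption for illustration:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// Same template shape minikube passes, minus the MTU/ContainerIPs clauses.
	format := `{"Name": "{{.Name}}","Driver": "{{.Driver}}",` +
		`"Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}",` +
		`"Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}"}`
	out, err := exec.Command("docker", "network", "inspect", "bridge", "--format", format).Output()
	if err != nil {
		fmt.Println("inspect failed (network may not exist):", err)
		return
	}
	var info struct{ Name, Driver, Subnet, Gateway string }
	if err := json.Unmarshal(out, &info); err != nil {
		fmt.Println("decode:", err)
		return
	}
	fmt.Printf("%+v\n", info)
}
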
I1014 19:15:07.959032 418677 kic.go:121] calculated static IP "192.168.49.2" for the "addons-995790" container
I1014 19:15:07.959107 418677 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1014 19:15:07.977066 418677 cli_runner.go:164] Run: docker volume create addons-995790 --label name.minikube.sigs.k8s.io=addons-995790 --label created_by.minikube.sigs.k8s.io=true
I1014 19:15:08.056989 418677 oci.go:103] Successfully created a docker volume addons-995790
I1014 19:15:08.057092 418677 cli_runner.go:164] Run: docker run --rm --name addons-995790-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-995790 --entrypoint /usr/bin/test -v addons-995790:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
I1014 19:15:14.536478 418677 cli_runner.go:217] Completed: docker run --rm --name addons-995790-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-995790 --entrypoint /usr/bin/test -v addons-995790:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib: (6.479342619s)
I1014 19:15:14.536549 418677 oci.go:107] Successfully prepared a docker volume addons-995790
I1014 19:15:14.536567 418677 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1014 19:15:14.536595 418677 kic.go:194] Starting extracting preloaded images to volume ...
I1014 19:15:14.536653 418677 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-995790:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
I1014 19:15:18.947715 418677 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-995790:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.411003721s)
I1014 19:15:18.947774 418677 kic.go:203] duration metric: took 4.411159233s to extract preloaded images to volume ...
W1014 19:15:18.947868 418677 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W1014 19:15:18.947924 418677 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
I1014 19:15:18.947967 418677 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1014 19:15:19.004530 418677 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-995790 --name addons-995790 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-995790 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-995790 --network addons-995790 --ip 192.168.49.2 --volume addons-995790:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
I1014 19:15:19.288040 418677 cli_runner.go:164] Run: docker container inspect addons-995790 --format={{.State.Running}}
I1014 19:15:19.307673 418677 cli_runner.go:164] Run: docker container inspect addons-995790 --format={{.State.Status}}
I1014 19:15:19.326235 418677 cli_runner.go:164] Run: docker exec addons-995790 stat /var/lib/dpkg/alternatives/iptables
I1014 19:15:19.373676 418677 oci.go:144] the created container "addons-995790" has a running status.
I1014 19:15:19.373711 418677 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/addons-995790/id_rsa...
I1014 19:15:19.438478 418677 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-413763/.minikube/machines/addons-995790/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1014 19:15:19.467598 418677 cli_runner.go:164] Run: docker container inspect addons-995790 --format={{.State.Status}}
I1014 19:15:19.487585 418677 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1014 19:15:19.487617 418677 kic_runner.go:114] Args: [docker exec --privileged addons-995790 chown docker:docker /home/docker/.ssh/authorized_keys]
I1014 19:15:19.530777 418677 cli_runner.go:164] Run: docker container inspect addons-995790 --format={{.State.Status}}
I1014 19:15:19.553491 418677 machine.go:93] provisionDockerMachine start ...
I1014 19:15:19.553635 418677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-995790
I1014 19:15:19.573226 418677 main.go:141] libmachine: Using SSH client type: native
I1014 19:15:19.573505 418677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil> [] 0s} 127.0.0.1 32888 <nil> <nil>}
I1014 19:15:19.573520 418677 main.go:141] libmachine: About to run SSH command:
hostname
I1014 19:15:19.574283 418677 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49188->127.0.0.1:32888: read: connection reset by peer
I1014 19:15:22.724358 418677 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-995790
I1014 19:15:22.724401 418677 ubuntu.go:182] provisioning hostname "addons-995790"
I1014 19:15:22.724470 418677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-995790
I1014 19:15:22.743022 418677 main.go:141] libmachine: Using SSH client type: native
I1014 19:15:22.743269 418677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil> [] 0s} 127.0.0.1 32888 <nil> <nil>}
I1014 19:15:22.743284 418677 main.go:141] libmachine: About to run SSH command:
sudo hostname addons-995790 && echo "addons-995790" | sudo tee /etc/hostname
I1014 19:15:22.900512 418677 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-995790
I1014 19:15:22.900585 418677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-995790
I1014 19:15:22.920031 418677 main.go:141] libmachine: Using SSH client type: native
I1014 19:15:22.920276 418677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil> [] 0s} 127.0.0.1 32888 <nil> <nil>}
I1014 19:15:22.920295 418677 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-995790' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-995790/g' /etc/hosts;
else
echo '127.0.1.1 addons-995790' | sudo tee -a /etc/hosts;
fi
fi
I1014 19:15:23.068004 418677 main.go:141] libmachine: SSH cmd err, output: <nil>:
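
Each "Using SSH client type: native" step above dials the container's published 22/tcp port on the host (127.0.0.1:32888 in this run), authenticates with the generated id_rsa key, and runs one command per session. A minimal sketch of that flow using golang.org/x/crypto/ssh; the key path is shortened to $HOME/.minikube/... for illustration, and minikube's real client wraps the same package differently:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPEM, err := os.ReadFile(os.Getenv("HOME") + "/.minikube/machines/addons-995790/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyPEM)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only: no known_hosts entry for a fresh kic container
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32888", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("hostname")
	fmt.Printf("%s err=%v\n", out, err)
}
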
I1014 19:15:23.068050 418677 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-413763/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-413763/.minikube}
I1014 19:15:23.068081 418677 ubuntu.go:190] setting up certificates
I1014 19:15:23.068102 418677 provision.go:84] configureAuth start
I1014 19:15:23.068156 418677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-995790
I1014 19:15:23.086311 418677 provision.go:143] copyHostCerts
I1014 19:15:23.086414 418677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem (1078 bytes)
I1014 19:15:23.086563 418677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem (1123 bytes)
I1014 19:15:23.086676 418677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem (1675 bytes)
I1014 19:15:23.086801 418677 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem org=jenkins.addons-995790 san=[127.0.0.1 192.168.49.2 addons-995790 localhost minikube]
I1014 19:15:23.273431 418677 provision.go:177] copyRemoteCerts
I1014 19:15:23.273511 418677 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1014 19:15:23.273574 418677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-995790
I1014 19:15:23.291916 418677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/addons-995790/id_rsa Username:docker}
I1014 19:15:23.396479 418677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1014 19:15:23.416691 418677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I1014 19:15:23.434262 418677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1014 19:15:23.453288 418677 provision.go:87] duration metric: took 385.170243ms to configureAuth
I1014 19:15:23.453319 418677 ubuntu.go:206] setting minikube options for container-runtime
I1014 19:15:23.453535 418677 config.go:182] Loaded profile config "addons-995790": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1014 19:15:23.453680 418677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-995790
I1014 19:15:23.471914 418677 main.go:141] libmachine: Using SSH client type: native
I1014 19:15:23.472137 418677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil> [] 0s} 127.0.0.1 32888 <nil> <nil>}
I1014 19:15:23.472152 418677 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I1014 19:15:23.733237 418677 main.go:141] libmachine: SSH cmd err, output: <nil>:
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
I1014 19:15:23.733270 418677 machine.go:96] duration metric: took 4.179754524s to provisionDockerMachine
I1014 19:15:23.733282 418677 client.go:171] duration metric: took 16.587618271s to LocalClient.Create
I1014 19:15:23.733305 418677 start.go:167] duration metric: took 16.587684582s to libmachine.API.Create "addons-995790"
I1014 19:15:23.733316 418677 start.go:293] postStartSetup for "addons-995790" (driver="docker")
I1014 19:15:23.733327 418677 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1014 19:15:23.733380 418677 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1014 19:15:23.733412 418677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-995790
I1014 19:15:23.751965 418677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/addons-995790/id_rsa Username:docker}
I1014 19:15:23.859846 418677 ssh_runner.go:195] Run: cat /etc/os-release
I1014 19:15:23.863838 418677 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1014 19:15:23.863870 418677 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1014 19:15:23.863883 418677 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/addons for local assets ...
I1014 19:15:23.863992 418677 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/files for local assets ...
I1014 19:15:23.864025 418677 start.go:296] duration metric: took 130.703561ms for postStartSetup
I1014 19:15:23.864349 418677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-995790
I1014 19:15:23.883188 418677 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/config.json ...
I1014 19:15:23.883467 418677 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1014 19:15:23.883511 418677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-995790
I1014 19:15:23.901674 418677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/addons-995790/id_rsa Username:docker}
I1014 19:15:24.004076 418677 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1014 19:15:24.009323 418677 start.go:128] duration metric: took 16.866258262s to createHost
I1014 19:15:24.009355 418677 start.go:83] releasing machines lock for "addons-995790", held for 16.866403979s
I1014 19:15:24.009448 418677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-995790
I1014 19:15:24.027603 418677 ssh_runner.go:195] Run: cat /version.json
I1014 19:15:24.027655 418677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-995790
I1014 19:15:24.027682 418677 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1014 19:15:24.027749 418677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-995790
I1014 19:15:24.047018 418677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/addons-995790/id_rsa Username:docker}
I1014 19:15:24.047980 418677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/addons-995790/id_rsa Username:docker}
I1014 19:15:24.147333 418677 ssh_runner.go:195] Run: systemctl --version
I1014 19:15:24.203010 418677 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
I1014 19:15:24.239265 418677 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1014 19:15:24.244247 418677 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1014 19:15:24.244326 418677 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1014 19:15:24.271213 418677 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1014 19:15:24.271241 418677 start.go:495] detecting cgroup driver to use...
I1014 19:15:24.271283 418677 detect.go:190] detected "systemd" cgroup driver on host os
I1014 19:15:24.271338 418677 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1014 19:15:24.288582 418677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1014 19:15:24.302136 418677 docker.go:218] disabling cri-docker service (if available) ...
I1014 19:15:24.302202 418677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1014 19:15:24.319309 418677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1014 19:15:24.338258 418677 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1014 19:15:24.421166 418677 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1014 19:15:24.507344 418677 docker.go:234] disabling docker service ...
I1014 19:15:24.507413 418677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1014 19:15:24.527160 418677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1014 19:15:24.540998 418677 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1014 19:15:24.619915 418677 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1014 19:15:24.702381 418677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1014 19:15:24.715637 418677 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I1014 19:15:24.730967 418677 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
I1014 19:15:24.731041 418677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
I1014 19:15:24.741751 418677 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
I1014 19:15:24.741850 418677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
I1014 19:15:24.751327 418677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
I1014 19:15:24.760660 418677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
I1014 19:15:24.769496 418677 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1014 19:15:24.778235 418677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
I1014 19:15:24.787210 418677 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
I1014 19:15:24.800836 418677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
I1014 19:15:24.809821 418677 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1014 19:15:24.818502 418677 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1014 19:15:24.826249 418677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1014 19:15:24.908825 418677 ssh_runner.go:195] Run: sudo systemctl restart crio
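
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to systemd, and re-add conmon_cgroup = "pod" after it (the default_sysctls edits follow the same grep-then-sed pattern). A sketch of that rewrite in Go against an in-memory stand-in; the starting values are assumed defaults, while the replacement values are taken verbatim from the log:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\n" +
		"cgroup_manager = \"cgroupfs\"\n"

	// Equivalent of: sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// Equivalent of the cgroup_manager substitution plus the delete/append pair for conmon_cgroup.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"systemd\"\nconmon_cgroup = \"pod\"")

	fmt.Print(conf)
}
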
I1014 19:15:25.018435 418677 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
I1014 19:15:25.018512 418677 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
I1014 19:15:25.022771 418677 start.go:563] Will wait 60s for crictl version
I1014 19:15:25.022829 418677 ssh_runner.go:195] Run: which crictl
I1014 19:15:25.026593 418677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1014 19:15:25.051748 418677 start.go:579] Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.34.1
RuntimeApiVersion: v1
I1014 19:15:25.051887 418677 ssh_runner.go:195] Run: crio --version
I1014 19:15:25.082124 418677 ssh_runner.go:195] Run: crio --version
I1014 19:15:25.114819 418677 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
I1014 19:15:25.116051 418677 cli_runner.go:164] Run: docker network inspect addons-995790 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1014 19:15:25.133238 418677 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I1014 19:15:25.137615 418677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1014 19:15:25.149084 418677 kubeadm.go:883] updating cluster {Name:addons-995790 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-995790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1014 19:15:25.149215 418677 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1014 19:15:25.149264 418677 ssh_runner.go:195] Run: sudo crictl images --output json
I1014 19:15:25.183200 418677 crio.go:514] all images are preloaded for cri-o runtime.
I1014 19:15:25.183223 418677 crio.go:433] Images already preloaded, skipping extraction
I1014 19:15:25.183270 418677 ssh_runner.go:195] Run: sudo crictl images --output json
I1014 19:15:25.211224 418677 crio.go:514] all images are preloaded for cri-o runtime.
I1014 19:15:25.211248 418677 cache_images.go:85] Images are preloaded, skipping loading
I1014 19:15:25.211257 418677 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
I1014 19:15:25.211378 418677 kubeadm.go:946] kubelet [Unit]
Wants=crio.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-995790 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.34.1 ClusterName:addons-995790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1014 19:15:25.211465 418677 ssh_runner.go:195] Run: crio config
I1014 19:15:25.258842 418677 cni.go:84] Creating CNI manager for ""
I1014 19:15:25.258862 418677 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
I1014 19:15:25.258884 418677 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1014 19:15:25.258909 418677 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-995790 NodeName:addons-995790 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1014 19:15:25.259030 418677 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.49.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/crio/crio.sock
name: "addons-995790"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.49.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I1014 19:15:25.259096 418677 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
I1014 19:15:25.268016 418677 binaries.go:44] Found k8s binaries, skipping transfer
I1014 19:15:25.268081 418677 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1014 19:15:25.276455 418677 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
I1014 19:15:25.289861 418677 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1014 19:15:25.306253 418677 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
I1014 19:15:25.319395 418677 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I1014 19:15:25.323228 418677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1014 19:15:25.334293 418677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1014 19:15:25.410090 418677 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1014 19:15:25.436485 418677 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790 for IP: 192.168.49.2
I1014 19:15:25.436515 418677 certs.go:195] generating shared ca certs ...
I1014 19:15:25.436536 418677 certs.go:227] acquiring lock for ca certs: {Name:mk332cc83f1e1747534fc81e896bed94f24941d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1014 19:15:25.436737 418677 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key
I1014 19:15:25.557889 418677 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt ...
I1014 19:15:25.557928 418677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt: {Name:mk2101298a47cdfc6a7535a5a89a43f86399641b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1014 19:15:25.558191 418677 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key ...
I1014 19:15:25.558212 418677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key: {Name:mk72a468f76fb8f554fa7e2da729b4a33b35df52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1014 19:15:25.558339 418677 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key
I1014 19:15:25.780710 418677 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt ...
I1014 19:15:25.780744 418677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt: {Name:mk8a4f460d1d6423585fbeb378daff541f57ef46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1014 19:15:25.780971 418677 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key ...
I1014 19:15:25.780996 418677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key: {Name:mka86b0277830100ef51b2ba9ab1ab8b3c14e1f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1014 19:15:25.781119 418677 certs.go:257] generating profile certs ...
I1014 19:15:25.781181 418677 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/client.key
I1014 19:15:25.781197 418677 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/client.crt with IP's: []
I1014 19:15:26.021360 418677 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/client.crt ...
I1014 19:15:26.021395 418677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/client.crt: {Name:mk568c3fb2b3ce7a619e65c16b9ccc7357b1de34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1014 19:15:26.022262 418677 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/client.key ...
I1014 19:15:26.022285 418677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/client.key: {Name:mkcbe05c68b1abcbf73dde4475efe992aa01dcfb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1014 19:15:26.022399 418677 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/apiserver.key.2cb922cf
I1014 19:15:26.022431 418677 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/apiserver.crt.2cb922cf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
I1014 19:15:26.181095 418677 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/apiserver.crt.2cb922cf ...
I1014 19:15:26.181132 418677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/apiserver.crt.2cb922cf: {Name:mk54a281e3240cd2ed152e6d5b8c0ca21fb3ed96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1014 19:15:26.181331 418677 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/apiserver.key.2cb922cf ...
I1014 19:15:26.181350 418677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/apiserver.key.2cb922cf: {Name:mkd2e457521989ab0cbe1fce8d998e1b7682489f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1014 19:15:26.181476 418677 certs.go:382] copying /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/apiserver.crt.2cb922cf -> /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/apiserver.crt
I1014 19:15:26.181568 418677 certs.go:386] copying /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/apiserver.key.2cb922cf -> /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/apiserver.key
I1014 19:15:26.181618 418677 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/proxy-client.key
I1014 19:15:26.181644 418677 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/proxy-client.crt with IP's: []
I1014 19:15:26.305564 418677 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/proxy-client.crt ...
I1014 19:15:26.305595 418677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/proxy-client.crt: {Name:mk0d3ce801fbf796b1b253618701baf984224cd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1014 19:15:26.305779 418677 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/proxy-client.key ...
I1014 19:15:26.305799 418677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/proxy-client.key: {Name:mk971f001fe582ee61df229dd9241d3ce1e12713 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1014 19:15:26.306684 418677 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem (1675 bytes)
I1014 19:15:26.306726 418677 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem (1078 bytes)
I1014 19:15:26.306747 418677 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem (1123 bytes)
I1014 19:15:26.306804 418677 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem (1675 bytes)
I1014 19:15:26.308080 418677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1014 19:15:26.328120 418677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1014 19:15:26.346466 418677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1014 19:15:26.364877 418677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1014 19:15:26.382948 418677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I1014 19:15:26.400784 418677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1014 19:15:26.418956 418677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1014 19:15:26.437111 418677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1014 19:15:26.454998 418677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1014 19:15:26.476035 418677 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1014 19:15:26.489293 418677 ssh_runner.go:195] Run: openssl version
I1014 19:15:26.496090 418677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1014 19:15:26.508062 418677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1014 19:15:26.512159 418677 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:15 /usr/share/ca-certificates/minikubeCA.pem
I1014 19:15:26.512225 418677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1014 19:15:26.546451 418677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1014 19:15:26.555518 418677 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1014 19:15:26.559800 418677 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1014 19:15:26.559873 418677 kubeadm.go:400] StartCluster: {Name:addons-995790 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-995790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1014 19:15:26.559972 418677 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I1014 19:15:26.560030 418677 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1014 19:15:26.588800 418677 cri.go:89] found id: ""
I1014 19:15:26.588892 418677 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1014 19:15:26.597437 418677 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1014 19:15:26.605988 418677 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I1014 19:15:26.606048 418677 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1014 19:15:26.613996 418677 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1014 19:15:26.614018 418677 kubeadm.go:157] found existing configuration files:
I1014 19:15:26.614062 418677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1014 19:15:26.622005 418677 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1014 19:15:26.622055 418677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1014 19:15:26.629686 418677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1014 19:15:26.637534 418677 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1014 19:15:26.637595 418677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1014 19:15:26.645355 418677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1014 19:15:26.653337 418677 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1014 19:15:26.653398 418677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1014 19:15:26.661244 418677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1014 19:15:26.669176 418677 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1014 19:15:26.669240 418677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1014 19:15:26.677064 418677 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1014 19:15:26.736796 418677 kubeadm.go:318] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
I1014 19:15:26.798564 418677 kubeadm.go:318] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1014 19:19:30.687033 418677 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
I1014 19:19:30.687284 418677 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
I1014 19:19:30.690377 418677 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
I1014 19:19:30.690500 418677 kubeadm.go:318] [preflight] Running pre-flight checks
I1014 19:19:30.690689 418677 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
I1014 19:19:30.690818 418677 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
I1014 19:19:30.690897 418677 kubeadm.go:318] OS: Linux
I1014 19:19:30.690990 418677 kubeadm.go:318] CGROUPS_CPU: enabled
I1014 19:19:30.691065 418677 kubeadm.go:318] CGROUPS_CPUSET: enabled
I1014 19:19:30.691137 418677 kubeadm.go:318] CGROUPS_DEVICES: enabled
I1014 19:19:30.691214 418677 kubeadm.go:318] CGROUPS_FREEZER: enabled
I1014 19:19:30.691289 418677 kubeadm.go:318] CGROUPS_MEMORY: enabled
I1014 19:19:30.691377 418677 kubeadm.go:318] CGROUPS_PIDS: enabled
I1014 19:19:30.691469 418677 kubeadm.go:318] CGROUPS_HUGETLB: enabled
I1014 19:19:30.691539 418677 kubeadm.go:318] CGROUPS_IO: enabled
I1014 19:19:30.691632 418677 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
I1014 19:19:30.691778 418677 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1014 19:19:30.691906 418677 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1014 19:19:30.691986 418677 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1014 19:19:30.694820 418677 out.go:252] - Generating certificates and keys ...
I1014 19:19:30.694984 418677 kubeadm.go:318] [certs] Using existing ca certificate authority
I1014 19:19:30.695092 418677 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
I1014 19:19:30.695205 418677 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
I1014 19:19:30.695277 418677 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
I1014 19:19:30.695362 418677 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
I1014 19:19:30.695410 418677 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
I1014 19:19:30.695458 418677 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
I1014 19:19:30.695553 418677 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-995790 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1014 19:19:30.695598 418677 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
I1014 19:19:30.695699 418677 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-995790 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1014 19:19:30.695811 418677 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
I1014 19:19:30.695884 418677 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
I1014 19:19:30.695938 418677 kubeadm.go:318] [certs] Generating "sa" key and public key
I1014 19:19:30.695989 418677 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1014 19:19:30.696030 418677 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
I1014 19:19:30.696076 418677 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1014 19:19:30.696124 418677 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1014 19:19:30.696201 418677 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1014 19:19:30.696257 418677 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1014 19:19:30.696331 418677 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1014 19:19:30.696394 418677 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1014 19:19:30.697912 418677 out.go:252] - Booting up control plane ...
I1014 19:19:30.697993 418677 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1014 19:19:30.698059 418677 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1014 19:19:30.698120 418677 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1014 19:19:30.698220 418677 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1014 19:19:30.698305 418677 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1014 19:19:30.698402 418677 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1014 19:19:30.698480 418677 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1014 19:19:30.698517 418677 kubeadm.go:318] [kubelet-start] Starting the kubelet
I1014 19:19:30.698618 418677 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1014 19:19:30.698709 418677 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1014 19:19:30.698781 418677 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001912654s
I1014 19:19:30.698881 418677 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1014 19:19:30.698979 418677 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
I1014 19:19:30.699082 418677 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1014 19:19:30.699155 418677 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1014 19:19:30.699229 418677 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000351135s
I1014 19:19:30.699297 418677 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000565398s
I1014 19:19:30.699362 418677 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000662204s
I1014 19:19:30.699368 418677 kubeadm.go:318]
I1014 19:19:30.699455 418677 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
I1014 19:19:30.699535 418677 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
I1014 19:19:30.699614 418677 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
I1014 19:19:30.699700 418677 kubeadm.go:318] - 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
I1014 19:19:30.699776 418677 kubeadm.go:318] Once you have found the failing container, you can inspect its logs with:
I1014 19:19:30.699855 418677 kubeadm.go:318] - 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
I1014 19:19:30.699900 418677 kubeadm.go:318]
W1014 19:19:30.700035 418677 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 6.8.0-1041-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_IO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [addons-995790 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [addons-995790 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.001912654s
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-controller-manager is not healthy after 4m0.000351135s
[control-plane-check] kube-apiserver is not healthy after 4m0.000565398s
[control-plane-check] kube-scheduler is not healthy after 4m0.000662204s
A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
To see the stack trace of this error execute with --v=5 or higher
I1014 19:19:30.700109 418677 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
I1014 19:19:31.147300 418677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1014 19:19:31.161333 418677 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I1014 19:19:31.161393 418677 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1014 19:19:31.170157 418677 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1014 19:19:31.170181 418677 kubeadm.go:157] found existing configuration files:
I1014 19:19:31.170230 418677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1014 19:19:31.179182 418677 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1014 19:19:31.179253 418677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1014 19:19:31.187857 418677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1014 19:19:31.195954 418677 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1014 19:19:31.196015 418677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1014 19:19:31.203851 418677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1014 19:19:31.211661 418677 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1014 19:19:31.211707 418677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1014 19:19:31.219224 418677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1014 19:19:31.226946 418677 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1014 19:19:31.227003 418677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1014 19:19:31.234369 418677 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1014 19:19:31.293676 418677 kubeadm.go:318] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
I1014 19:19:31.354438 418677 kubeadm.go:318] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1014 19:23:34.602125 418677 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
I1014 19:23:34.602353 418677 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
I1014 19:23:34.605314 418677 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
I1014 19:23:34.605379 418677 kubeadm.go:318] [preflight] Running pre-flight checks
I1014 19:23:34.605471 418677 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
I1014 19:23:34.605518 418677 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
I1014 19:23:34.605554 418677 kubeadm.go:318] OS: Linux
I1014 19:23:34.605600 418677 kubeadm.go:318] CGROUPS_CPU: enabled
I1014 19:23:34.605681 418677 kubeadm.go:318] CGROUPS_CPUSET: enabled
I1014 19:23:34.605772 418677 kubeadm.go:318] CGROUPS_DEVICES: enabled
I1014 19:23:34.605839 418677 kubeadm.go:318] CGROUPS_FREEZER: enabled
I1014 19:23:34.605917 418677 kubeadm.go:318] CGROUPS_MEMORY: enabled
I1014 19:23:34.605985 418677 kubeadm.go:318] CGROUPS_PIDS: enabled
I1014 19:23:34.606054 418677 kubeadm.go:318] CGROUPS_HUGETLB: enabled
I1014 19:23:34.606113 418677 kubeadm.go:318] CGROUPS_IO: enabled
I1014 19:23:34.606211 418677 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
I1014 19:23:34.606370 418677 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1014 19:23:34.606519 418677 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1014 19:23:34.606591 418677 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1014 19:23:34.610561 418677 out.go:252] - Generating certificates and keys ...
I1014 19:23:34.610641 418677 kubeadm.go:318] [certs] Using existing ca certificate authority
I1014 19:23:34.610706 418677 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
I1014 19:23:34.610793 418677 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1014 19:23:34.610868 418677 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
I1014 19:23:34.610930 418677 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
I1014 19:23:34.610989 418677 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
I1014 19:23:34.611057 418677 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
I1014 19:23:34.611108 418677 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
I1014 19:23:34.611171 418677 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1014 19:23:34.611229 418677 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1014 19:23:34.611260 418677 kubeadm.go:318] [certs] Using the existing "sa" key
I1014 19:23:34.611331 418677 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1014 19:23:34.611417 418677 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
I1014 19:23:34.611502 418677 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1014 19:23:34.611575 418677 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1014 19:23:34.611691 418677 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1014 19:23:34.611796 418677 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1014 19:23:34.611881 418677 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1014 19:23:34.611938 418677 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1014 19:23:34.615834 418677 out.go:252] - Booting up control plane ...
I1014 19:23:34.615927 418677 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1014 19:23:34.615999 418677 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1014 19:23:34.616053 418677 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1014 19:23:34.616141 418677 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1014 19:23:34.616223 418677 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1014 19:23:34.616305 418677 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1014 19:23:34.616375 418677 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1014 19:23:34.616410 418677 kubeadm.go:318] [kubelet-start] Starting the kubelet
I1014 19:23:34.616578 418677 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1014 19:23:34.616723 418677 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1014 19:23:34.616787 418677 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501329137s
I1014 19:23:34.616886 418677 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1014 19:23:34.616971 418677 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
I1014 19:23:34.617055 418677 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1014 19:23:34.617127 418677 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1014 19:23:34.617197 418677 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000175264s
I1014 19:23:34.617269 418677 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000197689s
I1014 19:23:34.617331 418677 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000651524s
I1014 19:23:34.617340 418677 kubeadm.go:318]
I1014 19:23:34.617424 418677 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
I1014 19:23:34.617498 418677 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
I1014 19:23:34.617568 418677 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
I1014 19:23:34.617642 418677 kubeadm.go:318] - 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
I1014 19:23:34.617710 418677 kubeadm.go:318] Once you have found the failing container, you can inspect its logs with:
I1014 19:23:34.617795 418677 kubeadm.go:318] - 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
I1014 19:23:34.617838 418677 kubeadm.go:318]
I1014 19:23:34.617883 418677 kubeadm.go:402] duration metric: took 8m8.058016144s to StartCluster
I1014 19:23:34.617950 418677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
I1014 19:23:34.618023 418677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1014 19:23:34.647116 418677 cri.go:89] found id: ""
I1014 19:23:34.647160 418677 logs.go:282] 0 containers: []
W1014 19:23:34.647172 418677 logs.go:284] No container was found matching "kube-apiserver"
I1014 19:23:34.647182 418677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
I1014 19:23:34.647255 418677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1014 19:23:34.673925 418677 cri.go:89] found id: ""
I1014 19:23:34.673951 418677 logs.go:282] 0 containers: []
W1014 19:23:34.673960 418677 logs.go:284] No container was found matching "etcd"
I1014 19:23:34.673966 418677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
I1014 19:23:34.674025 418677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1014 19:23:34.701391 418677 cri.go:89] found id: ""
I1014 19:23:34.701417 418677 logs.go:282] 0 containers: []
W1014 19:23:34.701425 418677 logs.go:284] No container was found matching "coredns"
I1014 19:23:34.701430 418677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
I1014 19:23:34.701502 418677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1014 19:23:34.728362 418677 cri.go:89] found id: ""
I1014 19:23:34.728388 418677 logs.go:282] 0 containers: []
W1014 19:23:34.728397 418677 logs.go:284] No container was found matching "kube-scheduler"
I1014 19:23:34.728403 418677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
I1014 19:23:34.728453 418677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1014 19:23:34.755212 418677 cri.go:89] found id: ""
I1014 19:23:34.755236 418677 logs.go:282] 0 containers: []
W1014 19:23:34.755243 418677 logs.go:284] No container was found matching "kube-proxy"
I1014 19:23:34.755249 418677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
I1014 19:23:34.755300 418677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1014 19:23:34.781082 418677 cri.go:89] found id: ""
I1014 19:23:34.781105 418677 logs.go:282] 0 containers: []
W1014 19:23:34.781113 418677 logs.go:284] No container was found matching "kube-controller-manager"
I1014 19:23:34.781119 418677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
I1014 19:23:34.781165 418677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1014 19:23:34.809238 418677 cri.go:89] found id: ""
I1014 19:23:34.809262 418677 logs.go:282] 0 containers: []
W1014 19:23:34.809272 418677 logs.go:284] No container was found matching "kindnet"
I1014 19:23:34.809287 418677 logs.go:123] Gathering logs for CRI-O ...
I1014 19:23:34.809305 418677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
I1014 19:23:34.873736 418677 logs.go:123] Gathering logs for container status ...
I1014 19:23:34.873796 418677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1014 19:23:34.904538 418677 logs.go:123] Gathering logs for kubelet ...
I1014 19:23:34.904566 418677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1014 19:23:34.968544 418677 logs.go:123] Gathering logs for dmesg ...
I1014 19:23:34.968582 418677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1014 19:23:34.986486 418677 logs.go:123] Gathering logs for describe nodes ...
I1014 19:23:34.986518 418677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1014 19:23:35.047524 418677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1014 19:23:35.039994 2389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1014 19:23:35.040511 2389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1014 19:23:35.042125 2389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1014 19:23:35.042584 2389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1014 19:23:35.044164 2389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
E1014 19:23:35.039994 2389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1014 19:23:35.040511 2389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1014 19:23:35.042125 2389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1014 19:23:35.042584 2389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1014 19:23:35.044164 2389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
W1014 19:23:35.047550 418677 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 6.8.0-1041-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_IO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.501329137s
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-apiserver is not healthy after 4m0.000175264s
[control-plane-check] kube-controller-manager is not healthy after 4m0.000197689s
[control-plane-check] kube-scheduler is not healthy after 4m0.000651524s
A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
To see the stack trace of this error execute with --v=5 or higher
W1014 19:23:35.047601 418677 out.go:285] *
W1014 19:23:35.047719 418677 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 6.8.0-1041-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_IO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.501329137s
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-apiserver is not healthy after 4m0.000175264s
[control-plane-check] kube-controller-manager is not healthy after 4m0.000197689s
[control-plane-check] kube-scheduler is not healthy after 4m0.000651524s
A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtime's CLI.
Here is one example of how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
To see the stack trace of this error execute with --v=5 or higher
W1014 19:23:35.047737 418677 out.go:285] *
W1014 19:23:35.049388 418677 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1014 19:23:35.056001 418677 out.go:203]
W1014 19:23:35.057592 418677 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 6.8.0-1041-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_IO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.501329137s
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-apiserver is not healthy after 4m0.000175264s
[control-plane-check] kube-controller-manager is not healthy after 4m0.000197689s
[control-plane-check] kube-scheduler is not healthy after 4m0.000651524s
A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtime's CLI.
Here is one example of how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
To see the stack trace of this error execute with --v=5 or higher
W1014 19:23:35.057651 418677 out.go:285] *
I1014 19:23:35.060157 418677 out.go:203]
** /stderr **
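
The kubeadm output above already names the triage path. Below is a minimal sketch of following it by hand, assuming the addons-995790 node from this run is still up enough for `minikube ssh` to reach it; the crictl commands are quoted from the log itself, while the journalctl step assumes the node image runs the kubelet under systemd.

# Open a shell on the failed node (profile name taken from this run).
minikube ssh -p addons-995790

# List all Kubernetes containers, including exited ones, exactly as the
# kubeadm output suggests.
sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause

# Inspect the logs of a failing container (substitute an ID from the listing).
sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID

# The kubelet itself reported healthy, so its journal is the next place to
# look for why the static pods never started.
sudo journalctl -u kubelet --no-pager | tail -n 50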
addons_test.go:110: out/minikube-linux-amd64 start -p addons-995790 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher failed: exit status 80
--- FAIL: TestAddons/Setup (521.01s)
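
The failure message reports "connection refused" on all three health endpoints, which suggests the control-plane containers never bound their ports rather than failing their checks. A sketch of probing the endpoints directly from inside the node, using the exact URLs kubeadm reported (-k because the serving certificates are self-signed; these mirror the plain GETs kubeadm itself performs):

# kube-apiserver liveness endpoint (address and port from the failure message)
curl -k https://192.168.49.2:8443/livez

# kube-controller-manager and kube-scheduler health endpoints
curl -k https://127.0.0.1:10257/healthz
curl -k https://127.0.0.1:10259/livez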