=== RUN TestAddons/Setup
addons_test.go:108: (dbg) Run: out/minikube-linux-amd64 start -p addons-436069 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p addons-436069 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: exit status 80 (8m34.760720029s)
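For context, each "(dbg) Run" step above is the integration harness shelling out to the minikube binary and recording the exit status. A minimal Go sketch of that pattern (not minikube's actual test helper; the long --addons flag list is elided here for brevity):

package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Re-run the failing start command; addon flags elided for brevity.
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "addons-436069", "--wait=true", "--memory=4096",
		"--alsologtostderr", "--driver=docker", "--container-runtime=crio")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// The harness reports this as "Non-zero exit: ... exit status 80".
			fmt.Fprintf(os.Stderr, "non-zero exit: %d\n", exitErr.ExitCode())
			os.Exit(exitErr.ExitCode())
		}
		fmt.Fprintln(os.Stderr, "failed to start command:", err)
		os.Exit(1)
	}
}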
-- stdout --
* [addons-436069] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
- MINIKUBE_LOCATION=21682
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "addons-436069" primary control-plane node in "addons-436069" cluster
* Pulling base image v0.0.48-1759382731-21643 ...
* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
-- /stdout --
** stderr **
I1002 20:22:58.727245 85408 out.go:360] Setting OutFile to fd 1 ...
I1002 20:22:58.727479 85408 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:22:58.727487 85408 out.go:374] Setting ErrFile to fd 2...
I1002 20:22:58.727491 85408 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:22:58.727706 85408 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
I1002 20:22:58.728253 85408 out.go:368] Setting JSON to false
I1002 20:22:58.729116 85408 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":7520,"bootTime":1759429059,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1002 20:22:58.729197 85408 start.go:140] virtualization: kvm guest
I1002 20:22:58.731395 85408 out.go:179] * [addons-436069] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1002 20:22:58.732841 85408 out.go:179] - MINIKUBE_LOCATION=21682
I1002 20:22:58.732837 85408 notify.go:220] Checking for updates...
I1002 20:22:58.734271 85408 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1002 20:22:58.735582 85408 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
I1002 20:22:58.736810 85408 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
I1002 20:22:58.738005 85408 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1002 20:22:58.739275 85408 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1002 20:22:58.741006 85408 driver.go:421] Setting default libvirt URI to qemu:///system
I1002 20:22:58.764171 85408 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
I1002 20:22:58.764350 85408 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1002 20:22:58.819134 85408 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:51 SystemTime:2025-10-02 20:22:58.809433985 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1002 20:22:58.819241 85408 docker.go:318] overlay module found
I1002 20:22:58.821699 85408 out.go:179] * Using the docker driver based on user configuration
I1002 20:22:58.823158 85408 start.go:304] selected driver: docker
I1002 20:22:58.823179 85408 start.go:924] validating driver "docker" against <nil>
I1002 20:22:58.823193 85408 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1002 20:22:58.823929 85408 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1002 20:22:58.880114 85408 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:51 SystemTime:2025-10-02 20:22:58.869500674 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1002 20:22:58.880257 85408 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1002 20:22:58.880471 85408 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1002 20:22:58.882165 85408 out.go:179] * Using Docker driver with root privileges
I1002 20:22:58.883464 85408 cni.go:84] Creating CNI manager for ""
I1002 20:22:58.883542 85408 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
I1002 20:22:58.883560 85408 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
I1002 20:22:58.883630 85408 start.go:348] cluster config:
{Name:addons-436069 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-436069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1002 20:22:58.885023 85408 out.go:179] * Starting "addons-436069" primary control-plane node in "addons-436069" cluster
I1002 20:22:58.886283 85408 cache.go:123] Beginning downloading kic base image for docker with crio
I1002 20:22:58.887595 85408 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
I1002 20:22:58.888981 85408 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1002 20:22:58.889020 85408 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
I1002 20:22:58.889028 85408 cache.go:58] Caching tarball of preloaded images
I1002 20:22:58.889023 85408 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
I1002 20:22:58.889116 85408 preload.go:233] Found /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
I1002 20:22:58.889127 85408 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
I1002 20:22:58.889483 85408 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/config.json ...
I1002 20:22:58.889508 85408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/config.json: {Name:mk39d759042797b89bb2baad365f87f5edd91ad6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 20:22:58.904981 85408 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
I1002 20:22:58.905152 85408 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
I1002 20:22:58.905174 85408 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory, skipping pull
I1002 20:22:58.905180 85408 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in cache, skipping pull
I1002 20:22:58.905193 85408 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
I1002 20:22:58.905201 85408 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from local cache
I1002 20:23:11.272069 85408 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from cached tarball
I1002 20:23:11.272109 85408 cache.go:232] Successfully downloaded all kic artifacts
I1002 20:23:11.272142 85408 start.go:360] acquireMachinesLock for addons-436069: {Name:mkc1c80a9dbdd8675adf7a837ad4b78f6dc0cbce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1002 20:23:11.272253 85408 start.go:364] duration metric: took 89.67µs to acquireMachinesLock for "addons-436069"
I1002 20:23:11.272280 85408 start.go:93] Provisioning new machine with config: &{Name:addons-436069 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-436069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
I1002 20:23:11.272359 85408 start.go:125] createHost starting for "" (driver="docker")
I1002 20:23:11.274246 85408 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
I1002 20:23:11.274530 85408 start.go:159] libmachine.API.Create for "addons-436069" (driver="docker")
I1002 20:23:11.274573 85408 client.go:168] LocalClient.Create starting
I1002 20:23:11.274689 85408 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem
I1002 20:23:11.556590 85408 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem
I1002 20:23:11.597341 85408 cli_runner.go:164] Run: docker network inspect addons-436069 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1002 20:23:11.614466 85408 cli_runner.go:211] docker network inspect addons-436069 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1002 20:23:11.614529 85408 network_create.go:284] running [docker network inspect addons-436069] to gather additional debugging logs...
I1002 20:23:11.614548 85408 cli_runner.go:164] Run: docker network inspect addons-436069
W1002 20:23:11.630619 85408 cli_runner.go:211] docker network inspect addons-436069 returned with exit code 1
I1002 20:23:11.630648 85408 network_create.go:287] error running [docker network inspect addons-436069]: docker network inspect addons-436069: exit status 1
stdout:
[]
stderr:
Error response from daemon: network addons-436069 not found
I1002 20:23:11.630668 85408 network_create.go:289] output of [docker network inspect addons-436069]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network addons-436069 not found
** /stderr **
I1002 20:23:11.630831 85408 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1002 20:23:11.647916 85408 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002100250}
I1002 20:23:11.647963 85408 network_create.go:124] attempt to create docker network addons-436069 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1002 20:23:11.648026 85408 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-436069 addons-436069
I1002 20:23:11.707394 85408 network_create.go:108] docker network addons-436069 192.168.49.0/24 created
I1002 20:23:11.707423 85408 kic.go:121] calculated static IP "192.168.49.2" for the "addons-436069" container
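The inspect failures at 20:23:11 above are the expected probe-before-create path: minikube inspects the named network, treats exit status 1 ("network addons-436069 not found") as absence, and only then creates it with the chosen subnet. A simplified Go sketch of that fallback (not minikube's network_create.go; the labels and --ip-masq/--icc options from the real command are omitted):

package main

import (
	"fmt"
	"os/exec"
)

// ensureNetwork mirrors the probe-before-create flow in the log: inspect
// exits non-zero when the network is absent, and only then do we create it.
func ensureNetwork(name, subnet, gateway string) error {
	if err := exec.Command("docker", "network", "inspect", name).Run(); err == nil {
		return nil // network already exists
	}
	out, err := exec.Command("docker", "network", "create",
		"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
		"-o", "com.docker.network.driver.mtu=1500", name).CombinedOutput()
	if err != nil {
		return fmt.Errorf("docker network create: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := ensureNetwork("addons-436069", "192.168.49.0/24", "192.168.49.1"); err != nil {
		fmt.Println(err)
	}
}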
I1002 20:23:11.707496 85408 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1002 20:23:11.724899 85408 cli_runner.go:164] Run: docker volume create addons-436069 --label name.minikube.sigs.k8s.io=addons-436069 --label created_by.minikube.sigs.k8s.io=true
I1002 20:23:11.742535 85408 oci.go:103] Successfully created a docker volume addons-436069
I1002 20:23:11.742630 85408 cli_runner.go:164] Run: docker run --rm --name addons-436069-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-436069 --entrypoint /usr/bin/test -v addons-436069:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
I1002 20:23:17.296454 85408 cli_runner.go:217] Completed: docker run --rm --name addons-436069-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-436069 --entrypoint /usr/bin/test -v addons-436069:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib: (5.553783939s)
I1002 20:23:17.296487 85408 oci.go:107] Successfully prepared a docker volume addons-436069
I1002 20:23:17.296518 85408 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1002 20:23:17.296538 85408 kic.go:194] Starting extracting preloaded images to volume ...
I1002 20:23:17.296615 85408 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-436069:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
I1002 20:23:21.673521 85408 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-436069:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.376847102s)
I1002 20:23:21.673561 85408 kic.go:203] duration metric: took 4.377018781s to extract preloaded images to volume ...
W1002 20:23:21.673657 85408 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W1002 20:23:21.673708 85408 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
I1002 20:23:21.673775 85408 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1002 20:23:21.727782 85408 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-436069 --name addons-436069 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-436069 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-436069 --network addons-436069 --ip 192.168.49.2 --volume addons-436069:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
I1002 20:23:22.013927 85408 cli_runner.go:164] Run: docker container inspect addons-436069 --format={{.State.Running}}
I1002 20:23:22.032780 85408 cli_runner.go:164] Run: docker container inspect addons-436069 --format={{.State.Status}}
I1002 20:23:22.049946 85408 cli_runner.go:164] Run: docker exec addons-436069 stat /var/lib/dpkg/alternatives/iptables
I1002 20:23:22.101205 85408 oci.go:144] the created container "addons-436069" has a running status.
I1002 20:23:22.101238 85408 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/addons-436069/id_rsa...
I1002 20:23:22.435698 85408 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21682-80114/.minikube/machines/addons-436069/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1002 20:23:22.461661 85408 cli_runner.go:164] Run: docker container inspect addons-436069 --format={{.State.Status}}
I1002 20:23:22.480195 85408 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1002 20:23:22.480232 85408 kic_runner.go:114] Args: [docker exec --privileged addons-436069 chown docker:docker /home/docker/.ssh/authorized_keys]
I1002 20:23:22.524523 85408 cli_runner.go:164] Run: docker container inspect addons-436069 --format={{.State.Status}}
I1002 20:23:22.542631 85408 machine.go:93] provisionDockerMachine start ...
I1002 20:23:22.542773 85408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-436069
I1002 20:23:22.560346 85408 main.go:141] libmachine: Using SSH client type: native
I1002 20:23:22.560659 85408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I1002 20:23:22.560678 85408 main.go:141] libmachine: About to run SSH command:
hostname
I1002 20:23:22.705732 85408 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-436069
I1002 20:23:22.705776 85408 ubuntu.go:182] provisioning hostname "addons-436069"
I1002 20:23:22.705839 85408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-436069
I1002 20:23:22.724077 85408 main.go:141] libmachine: Using SSH client type: native
I1002 20:23:22.724303 85408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I1002 20:23:22.724317 85408 main.go:141] libmachine: About to run SSH command:
sudo hostname addons-436069 && echo "addons-436069" | sudo tee /etc/hostname
I1002 20:23:22.876077 85408 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-436069
I1002 20:23:22.876192 85408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-436069
I1002 20:23:22.893376 85408 main.go:141] libmachine: Using SSH client type: native
I1002 20:23:22.893583 85408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I1002 20:23:22.893599 85408 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-436069' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-436069/g' /etc/hosts;
else
echo '127.0.1.1 addons-436069' | sudo tee -a /etc/hosts;
fi
fi
I1002 20:23:23.036537 85408 main.go:141] libmachine: SSH cmd err, output: <nil>:
I1002 20:23:23.036568 85408 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-80114/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-80114/.minikube}
I1002 20:23:23.036610 85408 ubuntu.go:190] setting up certificates
I1002 20:23:23.036624 85408 provision.go:84] configureAuth start
I1002 20:23:23.036678 85408 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-436069
I1002 20:23:23.054068 85408 provision.go:143] copyHostCerts
I1002 20:23:23.054147 85408 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem (1082 bytes)
I1002 20:23:23.054264 85408 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem (1123 bytes)
I1002 20:23:23.054333 85408 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem (1675 bytes)
I1002 20:23:23.054386 85408 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem org=jenkins.addons-436069 san=[127.0.0.1 192.168.49.2 addons-436069 localhost minikube]
I1002 20:23:23.161577 85408 provision.go:177] copyRemoteCerts
I1002 20:23:23.161637 85408 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1002 20:23:23.161692 85408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-436069
I1002 20:23:23.178947 85408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/addons-436069/id_rsa Username:docker}
I1002 20:23:23.281158 85408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I1002 20:23:23.300413 85408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1002 20:23:23.318382 85408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1002 20:23:23.335806 85408 provision.go:87] duration metric: took 299.164062ms to configureAuth
I1002 20:23:23.335838 85408 ubuntu.go:206] setting minikube options for container-runtime
I1002 20:23:23.336072 85408 config.go:182] Loaded profile config "addons-436069": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 20:23:23.336218 85408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-436069
I1002 20:23:23.354665 85408 main.go:141] libmachine: Using SSH client type: native
I1002 20:23:23.354899 85408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I1002 20:23:23.354918 85408 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I1002 20:23:23.608355 85408 main.go:141] libmachine: SSH cmd err, output: <nil>:
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
I1002 20:23:23.608382 85408 machine.go:96] duration metric: took 1.065718242s to provisionDockerMachine
I1002 20:23:23.608395 85408 client.go:171] duration metric: took 12.333810073s to LocalClient.Create
I1002 20:23:23.608420 85408 start.go:167] duration metric: took 12.333890414s to libmachine.API.Create "addons-436069"
I1002 20:23:23.608429 85408 start.go:293] postStartSetup for "addons-436069" (driver="docker")
I1002 20:23:23.608442 85408 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1002 20:23:23.608511 85408 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1002 20:23:23.608586 85408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-436069
I1002 20:23:23.625979 85408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/addons-436069/id_rsa Username:docker}
I1002 20:23:23.729771 85408 ssh_runner.go:195] Run: cat /etc/os-release
I1002 20:23:23.733425 85408 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1002 20:23:23.733453 85408 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1002 20:23:23.733465 85408 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/addons for local assets ...
I1002 20:23:23.733527 85408 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/files for local assets ...
I1002 20:23:23.733550 85408 start.go:296] duration metric: took 125.115167ms for postStartSetup
I1002 20:23:23.733855 85408 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-436069
I1002 20:23:23.750954 85408 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/config.json ...
I1002 20:23:23.751262 85408 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1002 20:23:23.751306 85408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-436069
I1002 20:23:23.768203 85408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/addons-436069/id_rsa Username:docker}
I1002 20:23:23.866973 85408 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1002 20:23:23.871193 85408 start.go:128] duration metric: took 12.598818239s to createHost
I1002 20:23:23.871221 85408 start.go:83] releasing machines lock for "addons-436069", held for 12.598953112s
I1002 20:23:23.871287 85408 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-436069
I1002 20:23:23.888209 85408 ssh_runner.go:195] Run: cat /version.json
I1002 20:23:23.888261 85408 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1002 20:23:23.888268 85408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-436069
I1002 20:23:23.888313 85408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-436069
I1002 20:23:23.906522 85408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/addons-436069/id_rsa Username:docker}
I1002 20:23:23.908205 85408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/addons-436069/id_rsa Username:docker}
I1002 20:23:24.074363 85408 ssh_runner.go:195] Run: systemctl --version
I1002 20:23:24.081162 85408 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
I1002 20:23:24.114923 85408 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1002 20:23:24.119623 85408 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1002 20:23:24.119680 85408 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1002 20:23:24.145084 85408 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1002 20:23:24.145112 85408 start.go:495] detecting cgroup driver to use...
I1002 20:23:24.145141 85408 detect.go:190] detected "systemd" cgroup driver on host os
I1002 20:23:24.145182 85408 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1002 20:23:24.160550 85408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1002 20:23:24.172014 85408 docker.go:218] disabling cri-docker service (if available) ...
I1002 20:23:24.172060 85408 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1002 20:23:24.187602 85408 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1002 20:23:24.205911 85408 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1002 20:23:24.284295 85408 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1002 20:23:24.371200 85408 docker.go:234] disabling docker service ...
I1002 20:23:24.371277 85408 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1002 20:23:24.390275 85408 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1002 20:23:24.403276 85408 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1002 20:23:24.483636 85408 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1002 20:23:24.563979 85408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1002 20:23:24.575865 85408 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I1002 20:23:24.589545 85408 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
I1002 20:23:24.589605 85408 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
I1002 20:23:24.599592 85408 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
I1002 20:23:24.599651 85408 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
I1002 20:23:24.608095 85408 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
I1002 20:23:24.617139 85408 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
I1002 20:23:24.625989 85408 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1002 20:23:24.633987 85408 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
I1002 20:23:24.642324 85408 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
I1002 20:23:24.655053 85408 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
I1002 20:23:24.663473 85408 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1002 20:23:24.670697 85408 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I1002 20:23:24.670838 85408 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I1002 20:23:24.683363 85408 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
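The sysctl failure at 20:23:24.670697 is likewise non-fatal: the bridge-nf-call-iptables key only exists once br_netfilter is loaded, so minikube probes the sysctl, falls back to modprobe, and then enables IPv4 forwarding. A local Go sketch of the same probe-then-load sequence (the real steps run over SSH via ssh_runner):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Probe the sysctl; it is only present after br_netfilter is loaded.
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// Fall back to loading the module, as the log does at 20:23:24.670838.
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			fmt.Println("modprobe br_netfilter failed:", err)
			return
		}
	}
	// Enable IPv4 forwarding, mirroring the echo-into-/proc step.
	if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
		fmt.Println("enabling ip_forward failed:", err)
	}
}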
I1002 20:23:24.690858 85408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1002 20:23:24.772848 85408 ssh_runner.go:195] Run: sudo systemctl restart crio
I1002 20:23:24.871021 85408 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
I1002 20:23:24.871110 85408 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
I1002 20:23:24.875176 85408 start.go:563] Will wait 60s for crictl version
I1002 20:23:24.875242 85408 ssh_runner.go:195] Run: which crictl
I1002 20:23:24.878893 85408 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1002 20:23:24.902718 85408 start.go:579] Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.34.1
RuntimeApiVersion: v1
I1002 20:23:24.902826 85408 ssh_runner.go:195] Run: crio --version
I1002 20:23:24.929536 85408 ssh_runner.go:195] Run: crio --version
I1002 20:23:24.958032 85408 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
I1002 20:23:24.959113 85408 cli_runner.go:164] Run: docker network inspect addons-436069 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1002 20:23:24.975765 85408 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I1002 20:23:24.980097 85408 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1002 20:23:24.990391 85408 kubeadm.go:883] updating cluster {Name:addons-436069 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-436069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1002 20:23:24.990527 85408 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1002 20:23:24.990580 85408 ssh_runner.go:195] Run: sudo crictl images --output json
I1002 20:23:25.024438 85408 crio.go:514] all images are preloaded for cri-o runtime.
I1002 20:23:25.024481 85408 crio.go:433] Images already preloaded, skipping extraction
I1002 20:23:25.024539 85408 ssh_runner.go:195] Run: sudo crictl images --output json
I1002 20:23:25.049104 85408 crio.go:514] all images are preloaded for cri-o runtime.
I1002 20:23:25.049125 85408 cache_images.go:85] Images are preloaded, skipping loading
I1002 20:23:25.049133 85408 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
I1002 20:23:25.049210 85408 kubeadm.go:946] kubelet [Unit]
Wants=crio.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-436069 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.34.1 ClusterName:addons-436069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1002 20:23:25.049266 85408 ssh_runner.go:195] Run: crio config
I1002 20:23:25.094609 85408 cni.go:84] Creating CNI manager for ""
I1002 20:23:25.094640 85408 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
I1002 20:23:25.094661 85408 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I1002 20:23:25.094681 85408 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-436069 NodeName:addons-436069 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1002 20:23:25.094835 85408 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.49.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/crio/crio.sock
name: "addons-436069"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.49.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
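The dump above is a multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small Go sketch that decodes each document and reports the kubelet's cgroup driver, which should match the "systemd" driver detected at 20:23:24.145141; it assumes the gopkg.in/yaml.v3 package and a hypothetical local copy named kubeadm.yaml:

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

type kubeletConfig struct {
	Kind         string `yaml:"kind"`
	CgroupDriver string `yaml:"cgroupDriver"`
}

func main() {
	f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the config above
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		var cfg kubeletConfig
		if err := dec.Decode(&cfg); err != nil {
			break // io.EOF once all documents are read
		}
		if cfg.Kind == "KubeletConfiguration" {
			fmt.Println("kubelet cgroup driver:", cfg.CgroupDriver) // expect "systemd" per the detection above
		}
	}
}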
I1002 20:23:25.094903 85408 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
I1002 20:23:25.102927 85408 binaries.go:44] Found k8s binaries, skipping transfer
I1002 20:23:25.103000 85408 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1002 20:23:25.110274 85408 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
I1002 20:23:25.122287 85408 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1002 20:23:25.137390 85408 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
I1002 20:23:25.149451 85408 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I1002 20:23:25.153030 85408 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1002 20:23:25.162415 85408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1002 20:23:25.240043 85408 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1002 20:23:25.271306 85408 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069 for IP: 192.168.49.2
I1002 20:23:25.271331 85408 certs.go:195] generating shared ca certs ...
I1002 20:23:25.271352 85408 certs.go:227] acquiring lock for ca certs: {Name:mk4f6af95c97eaf44ef2b15b9215876ac2b1c9f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 20:23:25.271502 85408 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key
I1002 20:23:25.420752 85408 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt ...
I1002 20:23:25.420782 85408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt: {Name:mkc601f6be1d2302a94e692bc2d9ae2acda9800b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 20:23:25.420967 85408 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key ...
I1002 20:23:25.420979 85408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key: {Name:mkef1bbc5960baece2e5e5207bc7cd1f9d83225b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 20:23:25.421057 85408 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key
I1002 20:23:25.734778 85408 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt ...
I1002 20:23:25.734807 85408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt: {Name:mk909181c3a57ff65c6125df90f7a6ad13c2c87a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 20:23:25.734977 85408 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key ...
I1002 20:23:25.734989 85408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key: {Name:mk54dbf10beaad6229e3a5278806b34b0e358f50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 20:23:25.735071 85408 certs.go:257] generating profile certs ...
I1002 20:23:25.735126 85408 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/client.key
I1002 20:23:25.735140 85408 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/client.crt with IP's: []
I1002 20:23:25.758012 85408 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/client.crt ...
I1002 20:23:25.758032 85408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/client.crt: {Name:mk5cfbb52b8d031396930e7bff64e6ce2c5aecc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 20:23:25.758166 85408 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/client.key ...
I1002 20:23:25.758176 85408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/client.key: {Name:mk7a3d8b24057fb4566bd07837c73eb7ac234a73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 20:23:25.758247 85408 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/apiserver.key.85a3edf8
I1002 20:23:25.758265 85408 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/apiserver.crt.85a3edf8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
I1002 20:23:25.812050 85408 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/apiserver.crt.85a3edf8 ...
I1002 20:23:25.812075 85408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/apiserver.crt.85a3edf8: {Name:mke748ae572d29dfd254bc63419d11b8950b520c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 20:23:25.812228 85408 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/apiserver.key.85a3edf8 ...
I1002 20:23:25.812240 85408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/apiserver.key.85a3edf8: {Name:mk7ff83bea87979549e28158b8cc4d11ae273add Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 20:23:25.812313 85408 certs.go:382] copying /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/apiserver.crt.85a3edf8 -> /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/apiserver.crt
I1002 20:23:25.812394 85408 certs.go:386] copying /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/apiserver.key.85a3edf8 -> /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/apiserver.key
I1002 20:23:25.812446 85408 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/proxy-client.key
I1002 20:23:25.812465 85408 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/proxy-client.crt with IP's: []
I1002 20:23:26.091720 85408 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/proxy-client.crt ...
I1002 20:23:26.091765 85408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/proxy-client.crt: {Name:mk0b73dd9fcbf5d26004a2ec947a847ce4340df3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 20:23:26.091935 85408 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/proxy-client.key ...
I1002 20:23:26.091947 85408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/proxy-client.key: {Name:mk64f2dd7a04d07bb42e524cc4136dbc291fde1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 20:23:26.092121 85408 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem (1675 bytes)
I1002 20:23:26.092166 85408 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem (1082 bytes)
I1002 20:23:26.092190 85408 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem (1123 bytes)
I1002 20:23:26.092211 85408 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem (1675 bytes)
I1002 20:23:26.092797 85408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1002 20:23:26.111130 85408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1002 20:23:26.128377 85408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1002 20:23:26.145383 85408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1002 20:23:26.163536 85408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I1002 20:23:26.182489 85408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1002 20:23:26.199978 85408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1002 20:23:26.217140 85408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I1002 20:23:26.233794 85408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1002 20:23:26.253282 85408 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
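With the full set of certificates copied into /var/lib/minikube/certs, a quick sanity check is to read one back before kubeadm consumes it. A minimal sketch, assuming OpenSSL 1.1.1+ inside the node (profile name and paths as used in this run):

# Print the subject and SANs of the freshly copied API server certificate.
minikube ssh -p addons-436069 -- sudo openssl x509 -noout -subject -ext subjectAltName -in /var/lib/minikube/certs/apiserver.crt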
I1002 20:23:26.265620 85408 ssh_runner.go:195] Run: openssl version
I1002 20:23:26.271633 85408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1002 20:23:26.282710 85408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1002 20:23:26.286383 85408 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 2 20:23 /usr/share/ca-certificates/minikubeCA.pem
I1002 20:23:26.286439 85408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1002 20:23:26.319830 85408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
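The b5213941.0 name in the symlink command above is not arbitrary: it is the OpenSSL subject hash that the preceding `openssl x509 -hash` run printed, and the system trust store looks certificates up by exactly that <hash>.0 name. The two steps amount to this sketch:

# Derive the subject hash the trust store keys on ...
hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
# ... and publish the CA under <hash>.0 so OpenSSL-based clients can find it.
sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"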
I1002 20:23:26.328433 85408 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1002 20:23:26.332002 85408 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1002 20:23:26.332082 85408 kubeadm.go:400] StartCluster: {Name:addons-436069 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-436069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1002 20:23:26.332162 85408 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I1002 20:23:26.332204 85408 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1002 20:23:26.359494 85408 cri.go:89] found id: ""
I1002 20:23:26.359578 85408 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1002 20:23:26.367639 85408 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1002 20:23:26.375646 85408 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I1002 20:23:26.375697 85408 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1002 20:23:26.383527 85408 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1002 20:23:26.383550 85408 kubeadm.go:157] found existing configuration files:
I1002 20:23:26.383592 85408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1002 20:23:26.390960 85408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1002 20:23:26.391023 85408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1002 20:23:26.398055 85408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1002 20:23:26.405339 85408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1002 20:23:26.405398 85408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1002 20:23:26.412346 85408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1002 20:23:26.419701 85408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1002 20:23:26.419776 85408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1002 20:23:26.426922 85408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1002 20:23:26.434164 85408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1002 20:23:26.434238 85408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1002 20:23:26.441701 85408 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1002 20:23:26.498191 85408 kubeadm.go:318] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
I1002 20:23:26.553837 85408 kubeadm.go:318] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
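Of the two preflight warnings, the missing "configs" kernel module is expected on this GCP kernel, and the kubelet-service warning is likely harmless here because minikube launches the kubelet itself; if desired, it can be silenced with the exact command the warning names, run inside the node:

minikube ssh -p addons-436069 -- sudo systemctl enable kubelet.service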
I1002 20:27:30.786627 85408 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
I1002 20:27:30.786779 85408 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
I1002 20:27:30.789580 85408 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
I1002 20:27:30.789700 85408 kubeadm.go:318] [preflight] Running pre-flight checks
I1002 20:27:30.789858 85408 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
I1002 20:27:30.789956 85408 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
I1002 20:27:30.790033 85408 kubeadm.go:318] OS: Linux
I1002 20:27:30.790109 85408 kubeadm.go:318] CGROUPS_CPU: enabled
I1002 20:27:30.790178 85408 kubeadm.go:318] CGROUPS_CPUSET: enabled
I1002 20:27:30.790258 85408 kubeadm.go:318] CGROUPS_DEVICES: enabled
I1002 20:27:30.790343 85408 kubeadm.go:318] CGROUPS_FREEZER: enabled
I1002 20:27:30.790391 85408 kubeadm.go:318] CGROUPS_MEMORY: enabled
I1002 20:27:30.790441 85408 kubeadm.go:318] CGROUPS_PIDS: enabled
I1002 20:27:30.790483 85408 kubeadm.go:318] CGROUPS_HUGETLB: enabled
I1002 20:27:30.790523 85408 kubeadm.go:318] CGROUPS_IO: enabled
I1002 20:27:30.790591 85408 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
I1002 20:27:30.790673 85408 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1002 20:27:30.790880 85408 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1002 20:27:30.790999 85408 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1002 20:27:30.794072 85408 out.go:252] - Generating certificates and keys ...
I1002 20:27:30.794194 85408 kubeadm.go:318] [certs] Using existing ca certificate authority
I1002 20:27:30.794306 85408 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
I1002 20:27:30.794367 85408 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
I1002 20:27:30.794417 85408 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
I1002 20:27:30.794471 85408 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
I1002 20:27:30.794540 85408 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
I1002 20:27:30.794616 85408 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
I1002 20:27:30.794848 85408 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-436069 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1002 20:27:30.794952 85408 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
I1002 20:27:30.795105 85408 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-436069 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1002 20:27:30.795171 85408 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
I1002 20:27:30.795225 85408 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
I1002 20:27:30.795263 85408 kubeadm.go:318] [certs] Generating "sa" key and public key
I1002 20:27:30.795315 85408 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1002 20:27:30.795373 85408 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
I1002 20:27:30.795431 85408 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1002 20:27:30.795487 85408 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1002 20:27:30.795546 85408 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1002 20:27:30.795609 85408 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1002 20:27:30.795676 85408 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1002 20:27:30.795773 85408 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1002 20:27:30.797680 85408 out.go:252] - Booting up control plane ...
I1002 20:27:30.797793 85408 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1002 20:27:30.797879 85408 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1002 20:27:30.797942 85408 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1002 20:27:30.798024 85408 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1002 20:27:30.798097 85408 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1002 20:27:30.798178 85408 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1002 20:27:30.798269 85408 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1002 20:27:30.798326 85408 kubeadm.go:318] [kubelet-start] Starting the kubelet
I1002 20:27:30.798444 85408 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1002 20:27:30.798536 85408 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1002 20:27:30.798602 85408 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.887694ms
I1002 20:27:30.798680 85408 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1002 20:27:30.798784 85408 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
I1002 20:27:30.798878 85408 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1002 20:27:30.798950 85408 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1002 20:27:30.799011 85408 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000591692s
I1002 20:27:30.799082 85408 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000805559s
I1002 20:27:30.799139 85408 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000903266s
I1002 20:27:30.799145 85408 kubeadm.go:318]
I1002 20:27:30.799221 85408 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
I1002 20:27:30.799291 85408 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
I1002 20:27:30.799364 85408 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
I1002 20:27:30.799446 85408 kubeadm.go:318] - 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
I1002 20:27:30.799526 85408 kubeadm.go:318] Once you have found the failing container, you can inspect its logs with:
I1002 20:27:30.799599 85408 kubeadm.go:318] - 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
I1002 20:27:30.799628 85408 kubeadm.go:318]
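Acting on kubeadm's troubleshooting hint from outside the container is easiest through minikube ssh; a sketch against this profile, with the container ID left as a placeholder:

# List control-plane containers, including ones that have already exited.
minikube ssh -p addons-436069 -- sudo crictl ps -a --name kube
# Then pull the logs of a failing container by the ID printed above.
minikube ssh -p addons-436069 -- sudo crictl logs <CONTAINERID>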
W1002 20:27:30.799823 85408 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 6.8.0-1041-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_IO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [addons-436069 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [addons-436069 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.887694ms
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-apiserver is not healthy after 4m0.000591692s
[control-plane-check] kube-controller-manager is not healthy after 4m0.000805559s
[control-plane-check] kube-scheduler is not healthy after 4m0.000903266s
A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
To see the stack trace of this error execute with --v=5 or higher
I1002 20:27:30.799913 85408 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
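Before retrying, minikube tears down the failed attempt with `kubeadm reset`, which removes the static-pod manifests and the etcd data that the first init created. A quick check that the wipe took, using the paths from the config dump above:

# Both paths should be absent or empty before the second kubeadm init runs.
minikube ssh -p addons-436069 -- sudo ls -la /etc/kubernetes/manifests /var/lib/minikube/etcd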
I1002 20:27:31.249692 85408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1002 20:27:31.262359 85408 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I1002 20:27:31.262411 85408 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1002 20:27:31.270431 85408 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1002 20:27:31.270451 85408 kubeadm.go:157] found existing configuration files:
I1002 20:27:31.270513 85408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1002 20:27:31.278494 85408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1002 20:27:31.278561 85408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1002 20:27:31.285991 85408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1002 20:27:31.293609 85408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1002 20:27:31.293660 85408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1002 20:27:31.301370 85408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1002 20:27:31.309321 85408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1002 20:27:31.309396 85408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1002 20:27:31.317135 85408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1002 20:27:31.324959 85408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1002 20:27:31.325015 85408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1002 20:27:31.332591 85408 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1002 20:27:31.367560 85408 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
I1002 20:27:31.367642 85408 kubeadm.go:318] [preflight] Running pre-flight checks
I1002 20:27:31.388019 85408 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
I1002 20:27:31.388130 85408 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
I1002 20:27:31.388175 85408 kubeadm.go:318] OS: Linux
I1002 20:27:31.388275 85408 kubeadm.go:318] CGROUPS_CPU: enabled
I1002 20:27:31.388370 85408 kubeadm.go:318] CGROUPS_CPUSET: enabled
I1002 20:27:31.388438 85408 kubeadm.go:318] CGROUPS_DEVICES: enabled
I1002 20:27:31.388516 85408 kubeadm.go:318] CGROUPS_FREEZER: enabled
I1002 20:27:31.388583 85408 kubeadm.go:318] CGROUPS_MEMORY: enabled
I1002 20:27:31.388671 85408 kubeadm.go:318] CGROUPS_PIDS: enabled
I1002 20:27:31.388782 85408 kubeadm.go:318] CGROUPS_HUGETLB: enabled
I1002 20:27:31.388876 85408 kubeadm.go:318] CGROUPS_IO: enabled
I1002 20:27:31.444609 85408 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
I1002 20:27:31.444795 85408 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1002 20:27:31.444986 85408 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1002 20:27:31.452273 85408 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1002 20:27:31.456338 85408 out.go:252] - Generating certificates and keys ...
I1002 20:27:31.456445 85408 kubeadm.go:318] [certs] Using existing ca certificate authority
I1002 20:27:31.456533 85408 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
I1002 20:27:31.456651 85408 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1002 20:27:31.456758 85408 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
I1002 20:27:31.456863 85408 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
I1002 20:27:31.456948 85408 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
I1002 20:27:31.457040 85408 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
I1002 20:27:31.457133 85408 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
I1002 20:27:31.457227 85408 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1002 20:27:31.457341 85408 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1002 20:27:31.457381 85408 kubeadm.go:318] [certs] Using the existing "sa" key
I1002 20:27:31.457440 85408 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1002 20:27:31.672954 85408 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
I1002 20:27:32.025360 85408 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1002 20:27:32.159044 85408 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1002 20:27:32.278275 85408 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1002 20:27:32.381591 85408 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1002 20:27:32.382085 85408 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1002 20:27:32.384393 85408 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1002 20:27:32.387571 85408 out.go:252] - Booting up control plane ...
I1002 20:27:32.387712 85408 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1002 20:27:32.387806 85408 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1002 20:27:32.387893 85408 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1002 20:27:32.400141 85408 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1002 20:27:32.400245 85408 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1002 20:27:32.406610 85408 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1002 20:27:32.407056 85408 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1002 20:27:32.407273 85408 kubeadm.go:318] [kubelet-start] Starting the kubelet
I1002 20:27:32.506375 85408 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1002 20:27:32.506555 85408 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1002 20:27:33.008296 85408 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.947449ms
I1002 20:27:33.011130 85408 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1002 20:27:33.011241 85408 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
I1002 20:27:33.011320 85408 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1002 20:27:33.011387 85408 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1002 20:31:33.011476 85408 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000054721s
I1002 20:31:33.011684 85408 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000034465s
I1002 20:31:33.011831 85408 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000172235s
I1002 20:31:33.011842 85408 kubeadm.go:318]
I1002 20:31:33.011975 85408 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
I1002 20:31:33.012102 85408 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
I1002 20:31:33.012245 85408 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
I1002 20:31:33.012388 85408 kubeadm.go:318] - 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
I1002 20:31:33.012490 85408 kubeadm.go:318] Once you have found the failing container, you can inspect its logs with:
I1002 20:31:33.012639 85408 kubeadm.go:318] - 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
I1002 20:31:33.012652 85408 kubeadm.go:318]
I1002 20:31:33.015272 85408 kubeadm.go:318] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
I1002 20:31:33.015445 85408 kubeadm.go:318] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1002 20:31:33.016147 85408 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded]
I1002 20:31:33.016208 85408 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
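All three components refuse connections, which points at containers that never came up or crashed immediately after start. While kubeadm is still waiting, the same three endpoints from the error can be probed by hand from a shell inside the node; -k is needed because the components serve self-signed certificates:

curl -k https://192.168.49.2:8443/livez    # kube-apiserver
curl -k https://127.0.0.1:10257/healthz    # kube-controller-manager
curl -k https://127.0.0.1:10259/livez      # kube-scheduler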
I1002 20:31:33.016314 85408 kubeadm.go:402] duration metric: took 8m6.684244277s to StartCluster
I1002 20:31:33.016377 85408 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
I1002 20:31:33.016435 85408 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1002 20:31:33.041911 85408 cri.go:89] found id: ""
I1002 20:31:33.041945 85408 logs.go:282] 0 containers: []
W1002 20:31:33.041953 85408 logs.go:284] No container was found matching "kube-apiserver"
I1002 20:31:33.041959 85408 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
I1002 20:31:33.042007 85408 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1002 20:31:33.070403 85408 cri.go:89] found id: ""
I1002 20:31:33.070435 85408 logs.go:282] 0 containers: []
W1002 20:31:33.070447 85408 logs.go:284] No container was found matching "etcd"
I1002 20:31:33.070458 85408 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
I1002 20:31:33.070523 85408 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1002 20:31:33.097185 85408 cri.go:89] found id: ""
I1002 20:31:33.097213 85408 logs.go:282] 0 containers: []
W1002 20:31:33.097221 85408 logs.go:284] No container was found matching "coredns"
I1002 20:31:33.097234 85408 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
I1002 20:31:33.097299 85408 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1002 20:31:33.123097 85408 cri.go:89] found id: ""
I1002 20:31:33.123123 85408 logs.go:282] 0 containers: []
W1002 20:31:33.123132 85408 logs.go:284] No container was found matching "kube-scheduler"
I1002 20:31:33.123139 85408 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
I1002 20:31:33.123187 85408 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1002 20:31:33.149186 85408 cri.go:89] found id: ""
I1002 20:31:33.149209 85408 logs.go:282] 0 containers: []
W1002 20:31:33.149217 85408 logs.go:284] No container was found matching "kube-proxy"
I1002 20:31:33.149222 85408 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
I1002 20:31:33.149271 85408 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1002 20:31:33.173539 85408 cri.go:89] found id: ""
I1002 20:31:33.173566 85408 logs.go:282] 0 containers: []
W1002 20:31:33.173575 85408 logs.go:284] No container was found matching "kube-controller-manager"
I1002 20:31:33.173581 85408 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
I1002 20:31:33.173628 85408 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1002 20:31:33.199446 85408 cri.go:89] found id: ""
I1002 20:31:33.199474 85408 logs.go:282] 0 containers: []
W1002 20:31:33.199485 85408 logs.go:284] No container was found matching "kindnet"
I1002 20:31:33.199498 85408 logs.go:123] Gathering logs for kubelet ...
I1002 20:31:33.199514 85408 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
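The 400-line kubelet dump gathered here is usually where the real failure is recorded. When reading it by hand, narrowing the journal to warnings and errors since the retry began (timestamp taken from this run) cuts most of the noise; these are standard journalctl flags:

# Only kubelet messages at priority warning or worse since the second init.
sudo journalctl -u kubelet -p warning --since "2025-10-02 20:27:31" --no-pager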
I1002 20:31:33.266874 85408 logs.go:123] Gathering logs for dmesg ...
I1002 20:31:33.266919 85408 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1002 20:31:33.281732 85408 logs.go:123] Gathering logs for describe nodes ...
I1002 20:31:33.281785 85408 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1002 20:31:33.340504 85408 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1002 20:31:33.331835 2369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1002 20:31:33.332360 2369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1002 20:31:33.333937 2369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1002 20:31:33.334433 2369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1002 20:31:33.336143 2369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1002 20:31:33.340540 85408 logs.go:123] Gathering logs for CRI-O ...
I1002 20:31:33.340555 85408 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
I1002 20:31:33.403016 85408 logs.go:123] Gathering logs for container status ...
I1002 20:31:33.403058 85408 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W1002 20:31:33.431521 85408 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 6.8.0-1041-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_IO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.947449ms
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-scheduler is not healthy after 4m0.000054721s
[control-plane-check] kube-controller-manager is not healthy after 4m0.000034465s
[control-plane-check] kube-apiserver is not healthy after 4m0.000172235s
A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded]
To see the stack trace of this error execute with --v=5 or higher
W1002 20:31:33.431601 85408 out.go:285] *
*
W1002 20:31:33.431669 85408 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 6.8.0-1041-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_IO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.947449ms
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-scheduler is not healthy after 4m0.000054721s
[control-plane-check] kube-controller-manager is not healthy after 4m0.000034465s
[control-plane-check] kube-apiserver is not healthy after 4m0.000172235s
A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtime's CLI.
Here is one example of how you can list all running Kubernetes containers using crictl:
- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded]
To see the stack trace of this error execute with --v=5 or higher
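All three health checks above failed with connection refused, so the components can be probed directly to confirm they never bound their ports. A hedged sketch, assuming the same node container name and that curl is present in the node image:
# Probe the endpoints kubeadm was polling (URLs copied from the log above); -k skips TLS verification
docker exec addons-436069 curl -sk https://127.0.0.1:10259/livez    # kube-scheduler
docker exec addons-436069 curl -sk https://127.0.0.1:10257/healthz  # kube-controller-manager
docker exec addons-436069 curl -sk https://192.168.49.2:8443/livez  # kube-apiserver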
W1002 20:31:33.431682 85408 out.go:285] *
*
W1002 20:31:33.433538 85408 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
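For this particular run, the log collection the box asks for would look roughly like the following; the -p profile flag is an assumption based on the profile name used throughout this test:
# Capture the full minikube log bundle for attachment to a GitHub issue
out/minikube-linux-amd64 logs -p addons-436069 --file=logs.txt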
I1002 20:31:33.437657 85408 out.go:203]
W1002 20:31:33.439276 85408 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 6.8.0-1041-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_IO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.947449ms
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-scheduler is not healthy after 4m0.000054721s
[control-plane-check] kube-controller-manager is not healthy after 4m0.000034465s
[control-plane-check] kube-apiserver is not healthy after 4m0.000172235s
A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtime's CLI.
Here is one example of how you can list all running Kubernetes containers using crictl:
- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded]
To see the stack trace of this error execute with --v=5 or higher
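Since the kubelet reported healthy while all three static pods stayed down, the kubelet journal inside the node is the usual next stop. A sketch assuming the node image runs systemd and ships journalctl:
# Tail the kubelet unit's journal for static-pod startup errors
docker exec addons-436069 journalctl -u kubelet --no-pager -n 50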
W1002 20:31:33.439300 85408 out.go:285] *
*
I1002 20:31:33.441966 85408 out.go:203]
** /stderr **
addons_test.go:110: out/minikube-linux-amd64 start -p addons-436069 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher failed: exit status 80
--- FAIL: TestAddons/Setup (514.80s)
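To reproduce this failure outside CI, the subtest can be re-run by name. A hedged sketch, assuming minikube's integration tests live under test/integration and that the binary has already been built to out/minikube-linux-amd64; additional driver and start-args flags may be needed to match this run:
# Re-run only the failing subtest with a generous timeout
go test ./test/integration -v -timeout 60m -run 'TestAddons/Setup'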