=== RUN TestAddons/Setup
addons_test.go:108: (dbg) Run: out/minikube-linux-amd64 start -p addons-252051 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p addons-252051 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: exit status 80 (8m33.017396809s)
-- stdout --
* [addons-252051] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
- MINIKUBE_LOCATION=21643
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "addons-252051" primary control-plane node in "addons-252051" cluster
* Pulling base image v0.0.48-1759382731-21643 ...
* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
-- /stdout --
** stderr **
I1002 06:05:44.420498 145688 out.go:360] Setting OutFile to fd 1 ...
I1002 06:05:44.420797 145688 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 06:05:44.420808 145688 out.go:374] Setting ErrFile to fd 2...
I1002 06:05:44.420814 145688 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 06:05:44.421029 145688 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-140751/.minikube/bin
I1002 06:05:44.421634 145688 out.go:368] Setting JSON to false
I1002 06:05:44.422656 145688 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2894,"bootTime":1759382250,"procs":258,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1002 06:05:44.422772 145688 start.go:140] virtualization: kvm guest
I1002 06:05:44.426360 145688 out.go:179] * [addons-252051] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1002 06:05:44.428593 145688 notify.go:220] Checking for updates...
I1002 06:05:44.428624 145688 out.go:179] - MINIKUBE_LOCATION=21643
I1002 06:05:44.430498 145688 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1002 06:05:44.432408 145688 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/21643-140751/kubeconfig
I1002 06:05:44.433584 145688 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-140751/.minikube
I1002 06:05:44.435066 145688 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1002 06:05:44.436424 145688 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1002 06:05:44.437826 145688 driver.go:421] Setting default libvirt URI to qemu:///system
I1002 06:05:44.461638 145688 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
I1002 06:05:44.461810 145688 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1002 06:05:44.527957 145688 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-02 06:05:44.516780905 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1002 06:05:44.528074 145688 docker.go:318] overlay module found
I1002 06:05:44.530090 145688 out.go:179] * Using the docker driver based on user configuration
I1002 06:05:44.531524 145688 start.go:304] selected driver: docker
I1002 06:05:44.531539 145688 start.go:924] validating driver "docker" against <nil>
I1002 06:05:44.531552 145688 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1002 06:05:44.532157 145688 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1002 06:05:44.593608 145688 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-02 06:05:44.583084502 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1002 06:05:44.593801 145688 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1002 06:05:44.593988 145688 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1002 06:05:44.595877 145688 out.go:179] * Using Docker driver with root privileges
I1002 06:05:44.597417 145688 cni.go:84] Creating CNI manager for ""
I1002 06:05:44.597474 145688 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
I1002 06:05:44.597489 145688 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
I1002 06:05:44.597579 145688 start.go:348] cluster config:
{Name:addons-252051 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-252051 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1002 06:05:44.599269 145688 out.go:179] * Starting "addons-252051" primary control-plane node in "addons-252051" cluster
I1002 06:05:44.600521 145688 cache.go:123] Beginning downloading kic base image for docker with crio
I1002 06:05:44.601903 145688 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
I1002 06:05:44.603315 145688 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1002 06:05:44.603374 145688 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
I1002 06:05:44.603383 145688 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
I1002 06:05:44.603396 145688 cache.go:58] Caching tarball of preloaded images
I1002 06:05:44.603496 145688 preload.go:233] Found /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
I1002 06:05:44.603509 145688 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
I1002 06:05:44.603853 145688 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/config.json ...
I1002 06:05:44.603879 145688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/config.json: {Name:mk5d4751732ada5e94cbee24060b407e17b31003 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 06:05:44.622333 145688 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
I1002 06:05:44.622473 145688 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
I1002 06:05:44.622494 145688 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory, skipping pull
I1002 06:05:44.622501 145688 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in cache, skipping pull
I1002 06:05:44.622511 145688 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
I1002 06:05:44.622518 145688 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from local cache
I1002 06:05:58.061127 145688 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from cached tarball
I1002 06:05:58.061181 145688 cache.go:232] Successfully downloaded all kic artifacts
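The manual equivalent of this cache step, as a sketch: loading an already-downloaded base image tarball into the local Docker daemon. $KIC_TARBALL is a hypothetical placeholder; minikube's exact cache layout is not shown in this log.
    # load a cached kic base image into the docker daemon, then confirm it is present
    docker load -i "$KIC_TARBALL"
    docker image ls gcr.io/k8s-minikube/kicbase-builds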
I1002 06:05:58.061255 145688 start.go:360] acquireMachinesLock for addons-252051: {Name:mk9a81aa2f8d4b95c2a97084fadbb2c481c32536 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1002 06:05:58.062069 145688 start.go:364] duration metric: took 780.151µs to acquireMachinesLock for "addons-252051"
I1002 06:05:58.062115 145688 start.go:93] Provisioning new machine with config: &{Name:addons-252051 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-252051 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
I1002 06:05:58.062190 145688 start.go:125] createHost starting for "" (driver="docker")
I1002 06:05:58.136522 145688 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
I1002 06:05:58.136875 145688 start.go:159] libmachine.API.Create for "addons-252051" (driver="docker")
I1002 06:05:58.136909 145688 client.go:168] LocalClient.Create starting
I1002 06:05:58.137145 145688 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem
I1002 06:05:58.345324 145688 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem
I1002 06:05:58.572530 145688 cli_runner.go:164] Run: docker network inspect addons-252051 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1002 06:05:58.590309 145688 cli_runner.go:211] docker network inspect addons-252051 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1002 06:05:58.590392 145688 network_create.go:284] running [docker network inspect addons-252051] to gather additional debugging logs...
I1002 06:05:58.590414 145688 cli_runner.go:164] Run: docker network inspect addons-252051
W1002 06:05:58.606810 145688 cli_runner.go:211] docker network inspect addons-252051 returned with exit code 1
I1002 06:05:58.606838 145688 network_create.go:287] error running [docker network inspect addons-252051]: docker network inspect addons-252051: exit status 1
stdout:
[]
stderr:
Error response from daemon: network addons-252051 not found
I1002 06:05:58.606853 145688 network_create.go:289] output of [docker network inspect addons-252051]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network addons-252051 not found
** /stderr **
I1002 06:05:58.606963 145688 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1002 06:05:58.625462 145688 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b42350}
I1002 06:05:58.625529 145688 network_create.go:124] attempt to create docker network addons-252051 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1002 06:05:58.625591 145688 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-252051 addons-252051
I1002 06:05:58.684102 145688 network_create.go:108] docker network addons-252051 192.168.49.0/24 created
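A minimal sketch of the same inspect-then-create flow, reusing only the flags visible in the log above (the profile/network name addons-252051 is taken from this run):
    # create the profile network only if it does not already exist
    docker network inspect addons-252051 >/dev/null 2>&1 || \
      docker network create --driver=bridge \
        --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
        -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
        --label=created_by.minikube.sigs.k8s.io=true \
        --label=name.minikube.sigs.k8s.io=addons-252051 \
        addons-252051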
I1002 06:05:58.684143 145688 kic.go:121] calculated static IP "192.168.49.2" for the "addons-252051" container
I1002 06:05:58.684220 145688 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1002 06:05:58.700763 145688 cli_runner.go:164] Run: docker volume create addons-252051 --label name.minikube.sigs.k8s.io=addons-252051 --label created_by.minikube.sigs.k8s.io=true
I1002 06:05:58.721914 145688 oci.go:103] Successfully created a docker volume addons-252051
I1002 06:05:58.721995 145688 cli_runner.go:164] Run: docker run --rm --name addons-252051-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-252051 --entrypoint /usr/bin/test -v addons-252051:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
I1002 06:06:00.789844 145688 cli_runner.go:217] Completed: docker run --rm --name addons-252051-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-252051 --entrypoint /usr/bin/test -v addons-252051:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib: (2.06780021s)
I1002 06:06:00.789879 145688 oci.go:107] Successfully prepared a docker volume addons-252051
I1002 06:06:00.789896 145688 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1002 06:06:00.789917 145688 kic.go:194] Starting extracting preloaded images to volume ...
I1002 06:06:00.789977 145688 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-252051:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
I1002 06:06:05.224845 145688 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-140751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-252051:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.434828783s)
I1002 06:06:05.224878 145688 kic.go:203] duration metric: took 4.434958737s to extract preloaded images to volume ...
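Replaying the preload extraction by hand reduces to a volume plus a throwaway tar container; in this sketch $PRELOAD and $KIC_IMAGE stand in for the tarball and base image named in the log:
    docker volume create addons-252051
    # untar the preloaded images into the volume that will become /var in the node container
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PRELOAD:/preloaded.tar:ro" \
      -v addons-252051:/extractDir \
      "$KIC_IMAGE" -I lz4 -xf /preloaded.tar -C /extractDir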
W1002 06:06:05.224970 145688 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W1002 06:06:05.225000 145688 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
I1002 06:06:05.225036 145688 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1002 06:06:05.278308 145688 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-252051 --name addons-252051 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-252051 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-252051 --network addons-252051 --ip 192.168.49.2 --volume addons-252051:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
I1002 06:06:05.576052 145688 cli_runner.go:164] Run: docker container inspect addons-252051 --format={{.State.Running}}
I1002 06:06:05.595581 145688 cli_runner.go:164] Run: docker container inspect addons-252051 --format={{.State.Status}}
I1002 06:06:05.614836 145688 cli_runner.go:164] Run: docker exec addons-252051 stat /var/lib/dpkg/alternatives/iptables
I1002 06:06:05.661042 145688 oci.go:144] the created container "addons-252051" has a running status.
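A sketch of the readiness poll performed here, waiting for the container's .State.Running to report true before the iptables sanity check:
    until [ "$(docker container inspect -f '{{.State.Running}}' addons-252051)" = "true" ]; do
      sleep 1
    done
    docker exec addons-252051 stat /var/lib/dpkg/alternatives/iptables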
I1002 06:06:05.661082 145688 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/addons-252051/id_rsa...
I1002 06:06:06.081440 145688 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21643-140751/.minikube/machines/addons-252051/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1002 06:06:06.109936 145688 cli_runner.go:164] Run: docker container inspect addons-252051 --format={{.State.Status}}
I1002 06:06:06.129218 145688 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1002 06:06:06.129247 145688 kic_runner.go:114] Args: [docker exec --privileged addons-252051 chown docker:docker /home/docker/.ssh/authorized_keys]
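The key provisioning above boils down to roughly this sketch (key paths shortened; the directory modes are assumptions, not values from the log):
    ssh-keygen -t rsa -N '' -f ./id_rsa
    docker exec -i addons-252051 sh -c \
      'install -d -m 700 -o docker -g docker /home/docker/.ssh && cat > /home/docker/.ssh/authorized_keys' < ./id_rsa.pub
    docker exec --privileged addons-252051 chown docker:docker /home/docker/.ssh/authorized_keys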
I1002 06:06:06.180965 145688 cli_runner.go:164] Run: docker container inspect addons-252051 --format={{.State.Status}}
I1002 06:06:06.200410 145688 machine.go:93] provisionDockerMachine start ...
I1002 06:06:06.200528 145688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-252051
I1002 06:06:06.219144 145688 main.go:141] libmachine: Using SSH client type: native
I1002 06:06:06.219465 145688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I1002 06:06:06.219479 145688 main.go:141] libmachine: About to run SSH command:
hostname
I1002 06:06:06.366768 145688 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-252051
I1002 06:06:06.366795 145688 ubuntu.go:182] provisioning hostname "addons-252051"
I1002 06:06:06.366858 145688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-252051
I1002 06:06:06.385062 145688 main.go:141] libmachine: Using SSH client type: native
I1002 06:06:06.385301 145688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I1002 06:06:06.385318 145688 main.go:141] libmachine: About to run SSH command:
sudo hostname addons-252051 && echo "addons-252051" | sudo tee /etc/hostname
I1002 06:06:06.541099 145688 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-252051
I1002 06:06:06.541179 145688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-252051
I1002 06:06:06.559944 145688 main.go:141] libmachine: Using SSH client type: native
I1002 06:06:06.560176 145688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I1002 06:06:06.560192 145688 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-252051' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-252051/g' /etc/hosts;
  else
    echo '127.0.1.1 addons-252051' | sudo tee -a /etc/hosts;
  fi
fi
I1002 06:06:06.707405 145688 main.go:141] libmachine: SSH cmd err, output: <nil>:
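Every SSH command in this phase travels through the host port Docker published for 22/tcp; a sketch of reaching the node the same way:
    PORT=$(docker container inspect -f \
      '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-252051)
    ssh -i ./id_rsa -o StrictHostKeyChecking=no -p "$PORT" docker@127.0.0.1 hostname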
I1002 06:06:06.707459 145688 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-140751/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-140751/.minikube}
I1002 06:06:06.707480 145688 ubuntu.go:190] setting up certificates
I1002 06:06:06.707491 145688 provision.go:84] configureAuth start
I1002 06:06:06.707544 145688 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-252051
I1002 06:06:06.725026 145688 provision.go:143] copyHostCerts
I1002 06:06:06.725116 145688 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/ca.pem (1078 bytes)
I1002 06:06:06.725246 145688 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/cert.pem (1123 bytes)
I1002 06:06:06.725327 145688 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-140751/.minikube/key.pem (1679 bytes)
I1002 06:06:06.725414 145688 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem org=jenkins.addons-252051 san=[127.0.0.1 192.168.49.2 addons-252051 localhost minikube]
I1002 06:06:07.221839 145688 provision.go:177] copyRemoteCerts
I1002 06:06:07.221909 145688 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1002 06:06:07.221948 145688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-252051
I1002 06:06:07.239551 145688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/addons-252051/id_rsa Username:docker}
I1002 06:06:07.343116 145688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I1002 06:06:07.363759 145688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1002 06:06:07.383330 145688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1002 06:06:07.401770 145688 provision.go:87] duration metric: took 694.261659ms to configureAuth
I1002 06:06:07.401800 145688 ubuntu.go:206] setting minikube options for container-runtime
I1002 06:06:07.402118 145688 config.go:182] Loaded profile config "addons-252051": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 06:06:07.402290 145688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-252051
I1002 06:06:07.421105 145688 main.go:141] libmachine: Using SSH client type: native
I1002 06:06:07.421316 145688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I1002 06:06:07.421333 145688 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I1002 06:06:07.688286 145688 main.go:141] libmachine: SSH cmd err, output: <nil>:
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
I1002 06:06:07.688307 145688 machine.go:96] duration metric: took 1.487869078s to provisionDockerMachine
I1002 06:06:07.688317 145688 client.go:171] duration metric: took 9.551402203s to LocalClient.Create
I1002 06:06:07.688335 145688 start.go:167] duration metric: took 9.551462175s to libmachine.API.Create "addons-252051"
I1002 06:06:07.688358 145688 start.go:293] postStartSetup for "addons-252051" (driver="docker")
I1002 06:06:07.688372 145688 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1002 06:06:07.688437 145688 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1002 06:06:07.688485 145688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-252051
I1002 06:06:07.706398 145688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/addons-252051/id_rsa Username:docker}
I1002 06:06:07.812010 145688 ssh_runner.go:195] Run: cat /etc/os-release
I1002 06:06:07.815910 145688 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1002 06:06:07.815936 145688 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1002 06:06:07.815947 145688 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/addons for local assets ...
I1002 06:06:07.816014 145688 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-140751/.minikube/files for local assets ...
I1002 06:06:07.816041 145688 start.go:296] duration metric: took 127.675445ms for postStartSetup
I1002 06:06:07.816363 145688 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-252051
I1002 06:06:07.834303 145688 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/config.json ...
I1002 06:06:07.834627 145688 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1002 06:06:07.834677 145688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-252051
I1002 06:06:07.852766 145688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/addons-252051/id_rsa Username:docker}
I1002 06:06:07.952864 145688 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1002 06:06:07.957802 145688 start.go:128] duration metric: took 9.895593261s to createHost
I1002 06:06:07.957834 145688 start.go:83] releasing machines lock for "addons-252051", held for 9.895738171s
I1002 06:06:07.957915 145688 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-252051
I1002 06:06:07.975632 145688 ssh_runner.go:195] Run: cat /version.json
I1002 06:06:07.975682 145688 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1002 06:06:07.975690 145688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-252051
I1002 06:06:07.975759 145688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-252051
I1002 06:06:07.994386 145688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/addons-252051/id_rsa Username:docker}
I1002 06:06:07.994894 145688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21643-140751/.minikube/machines/addons-252051/id_rsa Username:docker}
I1002 06:06:08.094185 145688 ssh_runner.go:195] Run: systemctl --version
I1002 06:06:08.146110 145688 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
I1002 06:06:08.182023 145688 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1002 06:06:08.186800 145688 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1002 06:06:08.186861 145688 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1002 06:06:08.214479 145688 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1002 06:06:08.214508 145688 start.go:495] detecting cgroup driver to use...
I1002 06:06:08.214543 145688 detect.go:190] detected "systemd" cgroup driver on host os
I1002 06:06:08.214597 145688 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1002 06:06:08.231820 145688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1002 06:06:08.244789 145688 docker.go:218] disabling cri-docker service (if available) ...
I1002 06:06:08.244851 145688 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1002 06:06:08.262315 145688 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1002 06:06:08.280855 145688 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1002 06:06:08.364446 145688 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1002 06:06:08.455286 145688 docker.go:234] disabling docker service ...
I1002 06:06:08.455378 145688 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1002 06:06:08.475423 145688 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1002 06:06:08.488843 145688 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1002 06:06:08.572447 145688 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1002 06:06:08.655003 145688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1002 06:06:08.668115 145688 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I1002 06:06:08.683855 145688 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
I1002 06:06:08.683939 145688 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
I1002 06:06:08.695223 145688 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
I1002 06:06:08.695309 145688 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
I1002 06:06:08.705078 145688 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
I1002 06:06:08.714369 145688 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
I1002 06:06:08.723497 145688 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1002 06:06:08.732007 145688 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
I1002 06:06:08.740909 145688 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
I1002 06:06:08.755463 145688 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
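After this run of sed edits, the drop-in can be spot-checked with grep; the expected values in the comment follow from the commands above:
    # expected: pause_image = "registry.k8s.io/pause:3.10.1", cgroup_manager = "systemd", conmon_cgroup = "pod"
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf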
I1002 06:06:08.764797 145688 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1002 06:06:08.772643 145688 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I1002 06:06:08.772708 145688 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I1002 06:06:08.786418 145688 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
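The netfilter fallback above (the sysctl fails until the module is loaded) is reproducible as a two-liner:
    # load br_netfilter if the sysctl is missing, then enable IPv4 forwarding
    sudo sysctl net.bridge.bridge-nf-call-iptables || sudo modprobe br_netfilter
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward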
I1002 06:06:08.794696 145688 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1002 06:06:08.872453 145688 ssh_runner.go:195] Run: sudo systemctl restart crio
I1002 06:06:08.985016 145688 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
I1002 06:06:08.985123 145688 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
I1002 06:06:08.989249 145688 start.go:563] Will wait 60s for crictl version
I1002 06:06:08.989320 145688 ssh_runner.go:195] Run: which crictl
I1002 06:06:08.992962 145688 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1002 06:06:09.019008 145688 start.go:579] Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.34.1
RuntimeApiVersion: v1
I1002 06:06:09.019133 145688 ssh_runner.go:195] Run: crio --version
I1002 06:06:09.049625 145688 ssh_runner.go:195] Run: crio --version
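crictl resolves the runtime endpoint from the /etc/crictl.yaml written earlier; passing the socket explicitly gives the same result, as a sketch:
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock images --output json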
I1002 06:06:09.081463 145688 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
I1002 06:06:09.083000 145688 cli_runner.go:164] Run: docker network inspect addons-252051 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1002 06:06:09.100290 145688 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I1002 06:06:09.104830 145688 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1002 06:06:09.115656 145688 kubeadm.go:883] updating cluster {Name:addons-252051 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-252051 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1002 06:06:09.115783 145688 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1002 06:06:09.115824 145688 ssh_runner.go:195] Run: sudo crictl images --output json
I1002 06:06:09.149035 145688 crio.go:514] all images are preloaded for cri-o runtime.
I1002 06:06:09.149058 145688 crio.go:433] Images already preloaded, skipping extraction
I1002 06:06:09.149104 145688 ssh_runner.go:195] Run: sudo crictl images --output json
I1002 06:06:09.175165 145688 crio.go:514] all images are preloaded for cri-o runtime.
I1002 06:06:09.175188 145688 cache_images.go:85] Images are preloaded, skipping loading
I1002 06:06:09.175195 145688 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
I1002 06:06:09.175280 145688 kubeadm.go:946] kubelet [Unit]
Wants=crio.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-252051 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.34.1 ClusterName:addons-252051 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1002 06:06:09.175340 145688 ssh_runner.go:195] Run: crio config
I1002 06:06:09.222285 145688 cni.go:84] Creating CNI manager for ""
I1002 06:06:09.222307 145688 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
I1002 06:06:09.222331 145688 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I1002 06:06:09.222378 145688 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-252051 NodeName:addons-252051 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1002 06:06:09.222537 145688 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "addons-252051"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.49.2"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
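A config of this shape can be exercised before it is applied; a sketch using the binary and file paths from the log:
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run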
I1002 06:06:09.222613 145688 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
I1002 06:06:09.231321 145688 binaries.go:44] Found k8s binaries, skipping transfer
I1002 06:06:09.231421 145688 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1002 06:06:09.239657 145688 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
I1002 06:06:09.253091 145688 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1002 06:06:09.270005 145688 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
I1002 06:06:09.283679 145688 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I1002 06:06:09.288145 145688 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1002 06:06:09.299059 145688 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1002 06:06:09.378660 145688 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1002 06:06:09.402007 145688 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051 for IP: 192.168.49.2
I1002 06:06:09.402029 145688 certs.go:195] generating shared ca certs ...
I1002 06:06:09.402049 145688 certs.go:227] acquiring lock for ca certs: {Name:mkf3b32ac69dfaeb6f176a29e597a8bbd1d6f8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 06:06:09.402904 145688 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key
I1002 06:06:09.591461 145688 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt ...
I1002 06:06:09.591494 145688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt: {Name:mk4d248a38294b99e755d8c8cff50a7bc6d6509e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 06:06:09.592425 145688 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key ...
I1002 06:06:09.592446 145688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key: {Name:mkc73b365bb7ee8cbaa90a9d2769cf11c83c976d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 06:06:09.593026 145688 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key
I1002 06:06:09.770572 145688 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt ...
I1002 06:06:09.770621 145688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt: {Name:mkd63ca89d0519b2e8fb31d8fc2fe7d0ebf6f596 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 06:06:09.771672 145688 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key ...
I1002 06:06:09.771708 145688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key: {Name:mk4b30402ad120e3c6d37beb8006dbdd07c4172b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 06:06:09.771862 145688 certs.go:257] generating profile certs ...
I1002 06:06:09.771948 145688 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/client.key
I1002 06:06:09.771970 145688 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/client.crt with IP's: []
I1002 06:06:10.196731 145688 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/client.crt ...
I1002 06:06:10.196772 145688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/client.crt: {Name:mk757c9de6de681e7590d8d8be2fae3f9735fc64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 06:06:10.196986 145688 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/client.key ...
I1002 06:06:10.197005 145688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/client.key: {Name:mka12cbcd4f9cb5907dbf0015f5b7b72590537af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 06:06:10.197120 145688 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/apiserver.key.76b13594
I1002 06:06:10.197149 145688 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/apiserver.crt.76b13594 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
I1002 06:06:10.283393 145688 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/apiserver.crt.76b13594 ...
I1002 06:06:10.283435 145688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/apiserver.crt.76b13594: {Name:mk25591e9b597f4a91c140dc58d7e9ab8ae50496 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 06:06:10.283676 145688 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/apiserver.key.76b13594 ...
I1002 06:06:10.283701 145688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/apiserver.key.76b13594: {Name:mk95759e8383b8cae3c6e3f5ebbfb0d687325d9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 06:06:10.283824 145688 certs.go:382] copying /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/apiserver.crt.76b13594 -> /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/apiserver.crt
I1002 06:06:10.283948 145688 certs.go:386] copying /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/apiserver.key.76b13594 -> /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/apiserver.key
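The apiserver profile cert above is minted with SANs for the service VIP (10.96.0.1), localhost, and the node IP (192.168.49.2), then copied into place. As a quick sanity check the SANs can be read back with openssl; a minimal sketch, assuming the profile path from this run:
openssl x509 -noout -text \
  -in /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/apiserver.crt \
  | grep -A1 'Subject Alternative Name'
# should list the four IPs logged above, plus any DNS SANs minikube adds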
I1002 06:06:10.284037 145688 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/proxy-client.key
I1002 06:06:10.284069 145688 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/proxy-client.crt with IP's: []
I1002 06:06:10.336033 145688 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/proxy-client.crt ...
I1002 06:06:10.336073 145688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/proxy-client.crt: {Name:mk966eee17a57ad90383c1687c53c8b271f5434a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 06:06:10.337044 145688 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/proxy-client.key ...
I1002 06:06:10.337077 145688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/proxy-client.key: {Name:mkd0f8f032dd09c4f57a659cb0da0bac0fef7bb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 06:06:10.337955 145688 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca-key.pem (1675 bytes)
I1002 06:06:10.338011 145688 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/ca.pem (1078 bytes)
I1002 06:06:10.338051 145688 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/cert.pem (1123 bytes)
I1002 06:06:10.338089 145688 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-140751/.minikube/certs/key.pem (1679 bytes)
I1002 06:06:10.338733 145688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1002 06:06:10.357919 145688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1002 06:06:10.376224 145688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1002 06:06:10.394681 145688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1002 06:06:10.412769 145688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I1002 06:06:10.431155 145688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1002 06:06:10.449124 145688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1002 06:06:10.467745 145688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/profiles/addons-252051/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I1002 06:06:10.486435 145688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-140751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1002 06:06:10.506561 145688 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1002 06:06:10.520280 145688 ssh_runner.go:195] Run: openssl version
I1002 06:06:10.526903 145688 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1002 06:06:10.538933 145688 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1002 06:06:10.543299 145688 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 2 06:06 /usr/share/ca-certificates/minikubeCA.pem
I1002 06:06:10.543374 145688 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1002 06:06:10.578431 145688 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
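The b5213941.0 link name above is the OpenSSL subject hash of minikubeCA.pem, which is what the openssl x509 -hash call computes; OpenSSL resolves trust lookups through <hash>.0 links under /etc/ssl/certs. A minimal sketch of the same two steps, assuming the in-node paths from the log:
HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints e.g. b5213941
sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"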
I1002 06:06:10.588098 145688 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1002 06:06:10.592144 145688 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1002 06:06:10.592212 145688 kubeadm.go:400] StartCluster: {Name:addons-252051 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-252051 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1002 06:06:10.592297 145688 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I1002 06:06:10.592356 145688 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1002 06:06:10.620742 145688 cri.go:89] found id: ""
I1002 06:06:10.620814 145688 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1002 06:06:10.629236 145688 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1002 06:06:10.637691 145688 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I1002 06:06:10.637752 145688 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1002 06:06:10.646143 145688 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1002 06:06:10.646162 145688 kubeadm.go:157] found existing configuration files:
I1002 06:06:10.646217 145688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1002 06:06:10.654160 145688 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1002 06:06:10.654227 145688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1002 06:06:10.662576 145688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1002 06:06:10.670722 145688 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1002 06:06:10.670788 145688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1002 06:06:10.680123 145688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1002 06:06:10.688651 145688 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1002 06:06:10.688732 145688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1002 06:06:10.697558 145688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1002 06:06:10.705925 145688 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1002 06:06:10.706025 145688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1002 06:06:10.713776 145688 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1002 06:06:10.754395 145688 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
I1002 06:06:10.754476 145688 kubeadm.go:318] [preflight] Running pre-flight checks
I1002 06:06:10.774890 145688 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
I1002 06:06:10.774961 145688 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
I1002 06:06:10.774998 145688 kubeadm.go:318] OS: Linux
I1002 06:06:10.775056 145688 kubeadm.go:318] CGROUPS_CPU: enabled
I1002 06:06:10.775130 145688 kubeadm.go:318] CGROUPS_CPUSET: enabled
I1002 06:06:10.775196 145688 kubeadm.go:318] CGROUPS_DEVICES: enabled
I1002 06:06:10.775273 145688 kubeadm.go:318] CGROUPS_FREEZER: enabled
I1002 06:06:10.775385 145688 kubeadm.go:318] CGROUPS_MEMORY: enabled
I1002 06:06:10.775480 145688 kubeadm.go:318] CGROUPS_PIDS: enabled
I1002 06:06:10.775555 145688 kubeadm.go:318] CGROUPS_HUGETLB: enabled
I1002 06:06:10.775627 145688 kubeadm.go:318] CGROUPS_IO: enabled
I1002 06:06:10.848242 145688 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
I1002 06:06:10.848373 145688 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1002 06:06:10.848547 145688 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1002 06:06:10.857116 145688 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1002 06:06:10.859497 145688 out.go:252] - Generating certificates and keys ...
I1002 06:06:10.859603 145688 kubeadm.go:318] [certs] Using existing ca certificate authority
I1002 06:06:10.859714 145688 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
I1002 06:06:10.942296 145688 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
I1002 06:06:11.217110 145688 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
I1002 06:06:11.890215 145688 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
I1002 06:06:12.129227 145688 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
I1002 06:06:12.308573 145688 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
I1002 06:06:12.308760 145688 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-252051 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1002 06:06:12.602430 145688 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
I1002 06:06:12.602602 145688 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-252051 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1002 06:06:12.887307 145688 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
I1002 06:06:13.013841 145688 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
I1002 06:06:13.056254 145688 kubeadm.go:318] [certs] Generating "sa" key and public key
I1002 06:06:13.056391 145688 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1002 06:06:13.122709 145688 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
I1002 06:06:13.356729 145688 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1002 06:06:13.557636 145688 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1002 06:06:13.649479 145688 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1002 06:06:13.765803 145688 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1002 06:06:13.766449 145688 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1002 06:06:13.770788 145688 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1002 06:06:13.772561 145688 out.go:252] - Booting up control plane ...
I1002 06:06:13.772660 145688 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1002 06:06:13.772731 145688 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1002 06:06:13.774562 145688 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1002 06:06:13.800468 145688 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1002 06:06:13.800595 145688 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1002 06:06:13.807843 145688 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1002 06:06:13.808093 145688 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1002 06:06:13.808153 145688 kubeadm.go:318] [kubelet-start] Starting the kubelet
I1002 06:06:13.910133 145688 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1002 06:06:13.910296 145688 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1002 06:06:14.411224 145688 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.245086ms
I1002 06:06:14.414470 145688 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1002 06:06:14.414602 145688 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
I1002 06:06:14.414759 145688 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1002 06:06:14.414877 145688 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1002 06:10:14.415640 145688 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000401297s
I1002 06:10:14.415998 145688 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000481371s
I1002 06:10:14.416197 145688 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000773344s
I1002 06:10:14.416214 145688 kubeadm.go:318]
I1002 06:10:14.416534 145688 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
I1002 06:10:14.416818 145688 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
I1002 06:10:14.417040 145688 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
I1002 06:10:14.417303 145688 kubeadm.go:318] - 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
I1002 06:10:14.417522 145688 kubeadm.go:318] Once you have found the failing container, you can inspect its logs with:
I1002 06:10:14.417701 145688 kubeadm.go:318] - 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
I1002 06:10:14.417716 145688 kubeadm.go:318]
I1002 06:10:14.420658 145688 kubeadm.go:318] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
I1002 06:10:14.420898 145688 kubeadm.go:318] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1002 06:10:14.421707 145688 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
I1002 06:10:14.421877 145688 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
W1002 06:10:14.422033 145688 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 6.8.0-1041-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_IO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [addons-252051 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [addons-252051 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.245086ms
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-controller-manager is not healthy after 4m0.000401297s
[control-plane-check] kube-apiserver is not healthy after 4m0.000481371s
[control-plane-check] kube-scheduler is not healthy after 4m0.000773344s
A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
To see the stack trace of this error execute with --v=5 or higher
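Following kubeadm's own hint above, the failed control-plane containers can be inspected from inside the node before the retry; a hedged sketch, assuming shell access via the profile from this run (CONTAINERID is a placeholder for an ID from the ps output):
minikube ssh -p addons-252051 "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
minikube ssh -p addons-252051 "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID"
minikube ssh -p addons-252051 "sudo journalctl -u crio --no-pager -n 200"   # runtime-side errors, if any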
I1002 06:10:14.422132 145688 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
I1002 06:10:14.871555 145688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1002 06:10:14.884523 145688 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I1002 06:10:14.884594 145688 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1002 06:10:14.893155 145688 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1002 06:10:14.893178 145688 kubeadm.go:157] found existing configuration files:
I1002 06:10:14.893233 145688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1002 06:10:14.901377 145688 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1002 06:10:14.901449 145688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1002 06:10:14.909646 145688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1002 06:10:14.918103 145688 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1002 06:10:14.918174 145688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1002 06:10:14.925791 145688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1002 06:10:14.933476 145688 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1002 06:10:14.933539 145688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1002 06:10:14.941100 145688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1002 06:10:14.949190 145688 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1002 06:10:14.949246 145688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1002 06:10:14.956629 145688 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1002 06:10:14.995498 145688 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
I1002 06:10:14.995602 145688 kubeadm.go:318] [preflight] Running pre-flight checks
I1002 06:10:15.016580 145688 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
I1002 06:10:15.016678 145688 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
I1002 06:10:15.016723 145688 kubeadm.go:318] OS: Linux
I1002 06:10:15.016807 145688 kubeadm.go:318] CGROUPS_CPU: enabled
I1002 06:10:15.016878 145688 kubeadm.go:318] CGROUPS_CPUSET: enabled
I1002 06:10:15.016942 145688 kubeadm.go:318] CGROUPS_DEVICES: enabled
I1002 06:10:15.017023 145688 kubeadm.go:318] CGROUPS_FREEZER: enabled
I1002 06:10:15.017118 145688 kubeadm.go:318] CGROUPS_MEMORY: enabled
I1002 06:10:15.017207 145688 kubeadm.go:318] CGROUPS_PIDS: enabled
I1002 06:10:15.017303 145688 kubeadm.go:318] CGROUPS_HUGETLB: enabled
I1002 06:10:15.017390 145688 kubeadm.go:318] CGROUPS_IO: enabled
I1002 06:10:15.079874 145688 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
I1002 06:10:15.080051 145688 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1002 06:10:15.080219 145688 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1002 06:10:15.087206 145688 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1002 06:10:15.091031 145688 out.go:252] - Generating certificates and keys ...
I1002 06:10:15.091121 145688 kubeadm.go:318] [certs] Using existing ca certificate authority
I1002 06:10:15.091182 145688 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
I1002 06:10:15.091252 145688 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1002 06:10:15.091309 145688 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
I1002 06:10:15.091428 145688 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
I1002 06:10:15.091523 145688 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
I1002 06:10:15.091584 145688 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
I1002 06:10:15.091649 145688 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
I1002 06:10:15.091758 145688 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1002 06:10:15.091875 145688 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1002 06:10:15.091960 145688 kubeadm.go:318] [certs] Using the existing "sa" key
I1002 06:10:15.092048 145688 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1002 06:10:15.345431 145688 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
I1002 06:10:15.456733 145688 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1002 06:10:15.592218 145688 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1002 06:10:16.060552 145688 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1002 06:10:16.300214 145688 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1002 06:10:16.300613 145688 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1002 06:10:16.303798 145688 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1002 06:10:16.306923 145688 out.go:252] - Booting up control plane ...
I1002 06:10:16.307077 145688 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1002 06:10:16.307174 145688 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1002 06:10:16.307291 145688 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1002 06:10:16.321430 145688 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1002 06:10:16.321595 145688 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1002 06:10:16.328972 145688 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1002 06:10:16.329143 145688 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1002 06:10:16.329198 145688 kubeadm.go:318] [kubelet-start] Starting the kubelet
I1002 06:10:16.438487 145688 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1002 06:10:16.438668 145688 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1002 06:10:16.940338 145688 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 502.032517ms
I1002 06:10:16.943353 145688 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1002 06:10:16.943483 145688 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
I1002 06:10:16.943597 145688 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1002 06:10:16.943699 145688 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1002 06:14:16.945209 145688 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001000265s
I1002 06:14:16.945593 145688 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001108275s
I1002 06:14:16.945805 145688 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001318373s
I1002 06:14:16.945866 145688 kubeadm.go:318]
I1002 06:14:16.946034 145688 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
I1002 06:14:16.946241 145688 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
I1002 06:14:16.946418 145688 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
I1002 06:14:16.946583 145688 kubeadm.go:318] - 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
I1002 06:14:16.946713 145688 kubeadm.go:318] Once you have found the failing container, you can inspect its logs with:
I1002 06:14:16.946912 145688 kubeadm.go:318] - 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
I1002 06:14:16.946929 145688 kubeadm.go:318]
I1002 06:14:16.949805 145688 kubeadm.go:318] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
I1002 06:14:16.949941 145688 kubeadm.go:318] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1002 06:14:16.950600 145688 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
I1002 06:14:16.950726 145688 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
I1002 06:14:16.950801 145688 kubeadm.go:402] duration metric: took 8m6.358592971s to StartCluster
I1002 06:14:16.950977 145688 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
I1002 06:14:16.951077 145688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1002 06:14:16.979277 145688 cri.go:89] found id: ""
I1002 06:14:16.979328 145688 logs.go:282] 0 containers: []
W1002 06:14:16.979370 145688 logs.go:284] No container was found matching "kube-apiserver"
I1002 06:14:16.979386 145688 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
I1002 06:14:16.979445 145688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1002 06:14:17.006074 145688 cri.go:89] found id: ""
I1002 06:14:17.006113 145688 logs.go:282] 0 containers: []
W1002 06:14:17.006124 145688 logs.go:284] No container was found matching "etcd"
I1002 06:14:17.006136 145688 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
I1002 06:14:17.006196 145688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1002 06:14:17.032581 145688 cri.go:89] found id: ""
I1002 06:14:17.032609 145688 logs.go:282] 0 containers: []
W1002 06:14:17.032618 145688 logs.go:284] No container was found matching "coredns"
I1002 06:14:17.032623 145688 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
I1002 06:14:17.032672 145688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1002 06:14:17.059155 145688 cri.go:89] found id: ""
I1002 06:14:17.059178 145688 logs.go:282] 0 containers: []
W1002 06:14:17.059186 145688 logs.go:284] No container was found matching "kube-scheduler"
I1002 06:14:17.059192 145688 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
I1002 06:14:17.059237 145688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1002 06:14:17.086243 145688 cri.go:89] found id: ""
I1002 06:14:17.086271 145688 logs.go:282] 0 containers: []
W1002 06:14:17.086282 145688 logs.go:284] No container was found matching "kube-proxy"
I1002 06:14:17.086292 145688 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
I1002 06:14:17.086389 145688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1002 06:14:17.113888 145688 cri.go:89] found id: ""
I1002 06:14:17.113912 145688 logs.go:282] 0 containers: []
W1002 06:14:17.113920 145688 logs.go:284] No container was found matching "kube-controller-manager"
I1002 06:14:17.113925 145688 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
I1002 06:14:17.113972 145688 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1002 06:14:17.140880 145688 cri.go:89] found id: ""
I1002 06:14:17.140904 145688 logs.go:282] 0 containers: []
W1002 06:14:17.140912 145688 logs.go:284] No container was found matching "kindnet"
I1002 06:14:17.140922 145688 logs.go:123] Gathering logs for kubelet ...
I1002 06:14:17.140933 145688 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1002 06:14:17.209243 145688 logs.go:123] Gathering logs for dmesg ...
I1002 06:14:17.209279 145688 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1002 06:14:17.221493 145688 logs.go:123] Gathering logs for describe nodes ...
I1002 06:14:17.221532 145688 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1002 06:14:17.282784 145688 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1002 06:14:17.275065 2382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1002 06:14:17.275648 2382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1002 06:14:17.277185 2382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1002 06:14:17.277587 2382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1002 06:14:17.279100 2382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
E1002 06:14:17.275065 2382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1002 06:14:17.275648 2382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1002 06:14:17.277185 2382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1002 06:14:17.277587 2382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1002 06:14:17.279100 2382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I1002 06:14:17.282815 145688 logs.go:123] Gathering logs for CRI-O ...
I1002 06:14:17.282826 145688 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
I1002 06:14:17.346460 145688 logs.go:123] Gathering logs for container status ...
I1002 06:14:17.346504 145688 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
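The gathering sequence above can be reproduced by hand on a wedged cluster; a sketch that mirrors the commands the test runs, assuming a shell on the node:
sudo journalctl -u kubelet -n 400 --no-pager
sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
sudo journalctl -u crio -n 400 --no-pager
sudo crictl ps -a || sudo docker ps -a   # container status, whichever CLI is present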
W1002 06:14:17.377299 145688 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 6.8.0-1041-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_IO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 502.032517ms
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-apiserver is not healthy after 4m0.001000265s
[control-plane-check] kube-scheduler is not healthy after 4m0.001108275s
[control-plane-check] kube-controller-manager is not healthy after 4m0.001318373s
A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
To see the stack trace of this error execute with --v=5 or higher
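The crictl advice printed above can be followed from inside the node container, which typically stays up even when kubeadm fails. A minimal sketch, assuming the addons-252051 profile still exists; CONTAINERID is a placeholder for an ID taken from the ps output:
  # open a shell on the node container (assumes the profile was not deleted)
  out/minikube-linux-amd64 ssh -p addons-252051
  # inside the node: list every Kubernetes container, including exited ones
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
  # inspect the logs of whichever component is failing (CONTAINERID is a placeholder)
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
  # the kubelet journal usually explains why static pods never came up
  sudo journalctl -u kubelet --no-pager | tail -n 100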
W1002 06:14:17.377366 145688 out.go:285] *
W1002 06:14:17.377439 145688 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 6.8.0-1041-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_IO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 502.032517ms
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-apiserver is not healthy after 4m0.001000265s
[control-plane-check] kube-scheduler is not healthy after 4m0.001108275s
[control-plane-check] kube-controller-manager is not healthy after 4m0.001318373s
A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
To see the stack trace of this error execute with --v=5 or higher
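All three health checks above failed with connection refused rather than an unhealthy status, which suggests the static pods never bound their ports at all. A hedged way to confirm this by hand, reusing the endpoints from the control-plane-check output; the scheduler and controller-manager ports are bound to the node's localhost, so these curls are assumed to run inside the node shell:
  # run from within 'out/minikube-linux-amd64 ssh -p addons-252051'
  curl -k https://192.168.49.2:8443/livez     # kube-apiserver
  curl -k https://127.0.0.1:10259/livez       # kube-scheduler
  curl -k https://127.0.0.1:10257/healthz     # kube-controller-manager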
W1002 06:14:17.377454 145688 out.go:285] *
W1002 06:14:17.379225 145688 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
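For the report requested in the box, the log bundle can be gathered with the same binary; a sketch, assuming the -p flag is used to select the failed profile:
  out/minikube-linux-amd64 logs -p addons-252051 --file=logs.txt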
I1002 06:14:17.382881 145688 out.go:203]
W1002 06:14:17.384205 145688 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout: (identical kubeadm init output to the "Error starting cluster" block above)
stderr: (identical SystemVerification and Service-Kubelet warnings, and the same wait-control-plane error, as in the block above)
W1002 06:14:17.384234 145688 out.go:285] *
I1002 06:14:17.385671 145688 out.go:203]
** /stderr **
addons_test.go:110: out/minikube-linux-amd64 start -p addons-252051 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher failed: exit status 80
--- FAIL: TestAddons/Setup (513.05s)
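When reproducing this failure locally, a reasonable next step is to delete the profile and re-run the start with client-side verbosity raised; a sketch under the same driver and runtime, with the long --addons list deliberately elided:
  # clean up the failed profile first (assumes no other tests need it)
  out/minikube-linux-amd64 delete -p addons-252051
  # re-run with verbose client logging to see where startup stalls
  out/minikube-linux-amd64 start -p addons-252051 --memory=4096 --driver=docker --container-runtime=crio --alsologtostderr -v=8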