=== RUN TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT TestForceSystemdFlag
docker_test.go:91: (dbg) Run: out/minikube-linux-arm64 start -p force-systemd-flag-027208 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=containerd
E1227 10:11:55.168043 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/functional-237950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-flag-027208 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=containerd: exit status 109 (8m20.207805904s)
-- stdout --
* [force-systemd-flag-027208] minikube v1.37.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=22343
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/22343-3531265/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-3531265/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "force-systemd-flag-027208" primary control-plane node in "force-systemd-flag-027208" cluster
* Pulling base image v0.0.48-1766570851-22316 ...
* Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
-- /stdout --
** stderr **
I1227 10:10:14.060682 3738115 out.go:360] Setting OutFile to fd 1 ...
I1227 10:10:14.060840 3738115 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 10:10:14.060853 3738115 out.go:374] Setting ErrFile to fd 2...
I1227 10:10:14.060859 3738115 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 10:10:14.061129 3738115 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-3531265/.minikube/bin
I1227 10:10:14.061557 3738115 out.go:368] Setting JSON to false
I1227 10:10:14.062452 3738115 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":57166,"bootTime":1766773048,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
I1227 10:10:14.062522 3738115 start.go:143] virtualization:
I1227 10:10:14.066189 3738115 out.go:179] * [force-systemd-flag-027208] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I1227 10:10:14.070968 3738115 out.go:179] - MINIKUBE_LOCATION=22343
I1227 10:10:14.071126 3738115 notify.go:221] Checking for updates...
I1227 10:10:14.077634 3738115 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1227 10:10:14.080928 3738115 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22343-3531265/kubeconfig
I1227 10:10:14.084146 3738115 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-3531265/.minikube
I1227 10:10:14.087414 3738115 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I1227 10:10:14.090571 3738115 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1227 10:10:14.094274 3738115 config.go:182] Loaded profile config "force-systemd-env-194624": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1227 10:10:14.094431 3738115 driver.go:422] Setting default libvirt URI to qemu:///system
I1227 10:10:14.131713 3738115 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
I1227 10:10:14.131835 3738115 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1227 10:10:14.222716 3738115 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:10:14.212351353 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1227 10:10:14.222833 3738115 docker.go:319] overlay module found
I1227 10:10:14.226201 3738115 out.go:179] * Using the docker driver based on user configuration
I1227 10:10:14.229183 3738115 start.go:309] selected driver: docker
I1227 10:10:14.229209 3738115 start.go:928] validating driver "docker" against <nil>
I1227 10:10:14.229223 3738115 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1227 10:10:14.229983 3738115 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1227 10:10:14.283479 3738115 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:10:14.273728372 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1227 10:10:14.283631 3738115 start_flags.go:333] no existing cluster config was found, will generate one from the flags
I1227 10:10:14.283847 3738115 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
I1227 10:10:14.286995 3738115 out.go:179] * Using Docker driver with root privileges
I1227 10:10:14.290011 3738115 cni.go:84] Creating CNI manager for ""
I1227 10:10:14.290080 3738115 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1227 10:10:14.290097 3738115 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
I1227 10:10:14.290178 3738115 start.go:353] cluster config:
{Name:force-systemd-flag-027208 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-027208 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1227 10:10:14.293396 3738115 out.go:179] * Starting "force-systemd-flag-027208" primary control-plane node in "force-systemd-flag-027208" cluster
I1227 10:10:14.296262 3738115 cache.go:134] Beginning downloading kic base image for docker with containerd
I1227 10:10:14.299201 3738115 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
I1227 10:10:14.302027 3738115 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I1227 10:10:14.302080 3738115 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-3531265/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
I1227 10:10:14.302089 3738115 cache.go:65] Caching tarball of preloaded images
I1227 10:10:14.302190 3738115 preload.go:251] Found /home/jenkins/minikube-integration/22343-3531265/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I1227 10:10:14.302205 3738115 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
I1227 10:10:14.302312 3738115 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/config.json ...
I1227 10:10:14.302339 3738115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/config.json: {Name:mk8e499633705fb35f3a63ac14b480b9b5477cb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 10:10:14.302514 3738115 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
I1227 10:10:14.324411 3738115 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
I1227 10:10:14.324434 3738115 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
I1227 10:10:14.324451 3738115 cache.go:243] Successfully downloaded all kic artifacts
I1227 10:10:14.324490 3738115 start.go:360] acquireMachinesLock for force-systemd-flag-027208: {Name:mk408a0d777415c6b3bf75190db8aa17e71bedcf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1227 10:10:14.324601 3738115 start.go:364] duration metric: took 89.656µs to acquireMachinesLock for "force-systemd-flag-027208"
I1227 10:10:14.324631 3738115 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-027208 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-027208 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1227 10:10:14.324705 3738115 start.go:125] createHost starting for "" (driver="docker")
I1227 10:10:14.328143 3738115 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
I1227 10:10:14.328386 3738115 start.go:159] libmachine.API.Create for "force-systemd-flag-027208" (driver="docker")
I1227 10:10:14.328425 3738115 client.go:173] LocalClient.Create starting
I1227 10:10:14.328500 3738115 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca.pem
I1227 10:10:14.328539 3738115 main.go:144] libmachine: Decoding PEM data...
I1227 10:10:14.328557 3738115 main.go:144] libmachine: Parsing certificate...
I1227 10:10:14.328611 3738115 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/cert.pem
I1227 10:10:14.328633 3738115 main.go:144] libmachine: Decoding PEM data...
I1227 10:10:14.328646 3738115 main.go:144] libmachine: Parsing certificate...
I1227 10:10:14.329018 3738115 cli_runner.go:164] Run: docker network inspect force-systemd-flag-027208 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1227 10:10:14.345559 3738115 cli_runner.go:211] docker network inspect force-systemd-flag-027208 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1227 10:10:14.345658 3738115 network_create.go:284] running [docker network inspect force-systemd-flag-027208] to gather additional debugging logs...
I1227 10:10:14.345680 3738115 cli_runner.go:164] Run: docker network inspect force-systemd-flag-027208
W1227 10:10:14.361855 3738115 cli_runner.go:211] docker network inspect force-systemd-flag-027208 returned with exit code 1
I1227 10:10:14.361884 3738115 network_create.go:287] error running [docker network inspect force-systemd-flag-027208]: docker network inspect force-systemd-flag-027208: exit status 1
stdout:
[]
stderr:
Error response from daemon: network force-systemd-flag-027208 not found
I1227 10:10:14.361897 3738115 network_create.go:289] output of [docker network inspect force-systemd-flag-027208]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network force-systemd-flag-027208 not found
** /stderr **
I1227 10:10:14.362011 3738115 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1227 10:10:14.379980 3738115 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d8712ba8a9f7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:9e:f2:5a:61:6a:4e} reservation:<nil>}
I1227 10:10:14.380333 3738115 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-43ae11d059eb IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6e:6d:b0:96:78:2a} reservation:<nil>}
I1227 10:10:14.380708 3738115 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8c4bd1426b4b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:7a:5d:63:1e:36:ed} reservation:<nil>}
I1227 10:10:14.380950 3738115 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-a07a37a22614 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:a6:04:fd:b9:e2:9a} reservation:<nil>}
I1227 10:10:14.381366 3738115 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d1ce0}
I1227 10:10:14.381389 3738115 network_create.go:124] attempt to create docker network force-systemd-flag-027208 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
I1227 10:10:14.381445 3738115 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-027208 force-systemd-flag-027208
I1227 10:10:14.441506 3738115 network_create.go:108] docker network force-systemd-flag-027208 192.168.85.0/24 created
I1227 10:10:14.441539 3738115 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-027208" container
I1227 10:10:14.441612 3738115 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1227 10:10:14.457713 3738115 cli_runner.go:164] Run: docker volume create force-systemd-flag-027208 --label name.minikube.sigs.k8s.io=force-systemd-flag-027208 --label created_by.minikube.sigs.k8s.io=true
I1227 10:10:14.476328 3738115 oci.go:103] Successfully created a docker volume force-systemd-flag-027208
I1227 10:10:14.476443 3738115 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-027208-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-027208 --entrypoint /usr/bin/test -v force-systemd-flag-027208:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
I1227 10:10:15.042844 3738115 oci.go:107] Successfully prepared a docker volume force-systemd-flag-027208
I1227 10:10:15.042916 3738115 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I1227 10:10:15.042928 3738115 kic.go:194] Starting extracting preloaded images to volume ...
I1227 10:10:15.043044 3738115 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-3531265/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-027208:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
I1227 10:10:18.934663 3738115 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-3531265/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-027208:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (3.891575702s)
I1227 10:10:18.934700 3738115 kic.go:203] duration metric: took 3.891766533s to extract preloaded images to volume ...
W1227 10:10:18.934838 3738115 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1227 10:10:18.934972 3738115 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1227 10:10:18.984807 3738115 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-027208 --name force-systemd-flag-027208 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-027208 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-027208 --network force-systemd-flag-027208 --ip 192.168.85.2 --volume force-systemd-flag-027208:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
I1227 10:10:19.288318 3738115 cli_runner.go:164] Run: docker container inspect force-systemd-flag-027208 --format={{.State.Running}}
I1227 10:10:19.312460 3738115 cli_runner.go:164] Run: docker container inspect force-systemd-flag-027208 --format={{.State.Status}}
I1227 10:10:19.332923 3738115 cli_runner.go:164] Run: docker exec force-systemd-flag-027208 stat /var/lib/dpkg/alternatives/iptables
I1227 10:10:19.398079 3738115 oci.go:144] the created container "force-systemd-flag-027208" has a running status.
I1227 10:10:19.398134 3738115 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22343-3531265/.minikube/machines/force-systemd-flag-027208/id_rsa...
I1227 10:10:19.979164 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/machines/force-systemd-flag-027208/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I1227 10:10:19.979299 3738115 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22343-3531265/.minikube/machines/force-systemd-flag-027208/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1227 10:10:19.999194 3738115 cli_runner.go:164] Run: docker container inspect force-systemd-flag-027208 --format={{.State.Status}}
I1227 10:10:20.030475 3738115 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1227 10:10:20.030501 3738115 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-027208 chown docker:docker /home/docker/.ssh/authorized_keys]
I1227 10:10:20.074535 3738115 cli_runner.go:164] Run: docker container inspect force-systemd-flag-027208 --format={{.State.Status}}
I1227 10:10:20.093820 3738115 machine.go:94] provisionDockerMachine start ...
I1227 10:10:20.093949 3738115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-027208
I1227 10:10:20.121792 3738115 main.go:144] libmachine: Using SSH client type: native
I1227 10:10:20.122155 3738115 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 36225 <nil> <nil>}
I1227 10:10:20.122171 3738115 main.go:144] libmachine: About to run SSH command:
hostname
I1227 10:10:20.122773 3738115 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51694->127.0.0.1:36225: read: connection reset by peer
I1227 10:10:23.267068 3738115 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-027208
I1227 10:10:23.267094 3738115 ubuntu.go:182] provisioning hostname "force-systemd-flag-027208"
I1227 10:10:23.267161 3738115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-027208
I1227 10:10:23.286197 3738115 main.go:144] libmachine: Using SSH client type: native
I1227 10:10:23.286515 3738115 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 36225 <nil> <nil>}
I1227 10:10:23.286534 3738115 main.go:144] libmachine: About to run SSH command:
sudo hostname force-systemd-flag-027208 && echo "force-systemd-flag-027208" | sudo tee /etc/hostname
I1227 10:10:23.437194 3738115 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-027208
I1227 10:10:23.437279 3738115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-027208
I1227 10:10:23.456503 3738115 main.go:144] libmachine: Using SSH client type: native
I1227 10:10:23.456885 3738115 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 36225 <nil> <nil>}
I1227 10:10:23.456913 3738115 main.go:144] libmachine: About to run SSH command:
if ! grep -xq '.*\sforce-systemd-flag-027208' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-027208/g' /etc/hosts;
else
echo '127.0.1.1 force-systemd-flag-027208' | sudo tee -a /etc/hosts;
fi
fi
I1227 10:10:23.595282 3738115 main.go:144] libmachine: SSH cmd err, output: <nil>:
I1227 10:10:23.595307 3738115 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-3531265/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-3531265/.minikube}
I1227 10:10:23.595327 3738115 ubuntu.go:190] setting up certificates
I1227 10:10:23.595336 3738115 provision.go:84] configureAuth start
I1227 10:10:23.595398 3738115 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-027208
I1227 10:10:23.612849 3738115 provision.go:143] copyHostCerts
I1227 10:10:23.612896 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.pem
I1227 10:10:23.612928 3738115 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.pem, removing ...
I1227 10:10:23.612938 3738115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.pem
I1227 10:10:23.613020 3738115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.pem (1082 bytes)
I1227 10:10:23.613112 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22343-3531265/.minikube/cert.pem
I1227 10:10:23.613137 3738115 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-3531265/.minikube/cert.pem, removing ...
I1227 10:10:23.613147 3738115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-3531265/.minikube/cert.pem
I1227 10:10:23.613184 3738115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-3531265/.minikube/cert.pem (1123 bytes)
I1227 10:10:23.613236 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22343-3531265/.minikube/key.pem
I1227 10:10:23.613270 3738115 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-3531265/.minikube/key.pem, removing ...
I1227 10:10:23.613277 3738115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-3531265/.minikube/key.pem
I1227 10:10:23.613304 3738115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-3531265/.minikube/key.pem (1675 bytes)
I1227 10:10:23.613366 3738115 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-027208 san=[127.0.0.1 192.168.85.2 force-systemd-flag-027208 localhost minikube]
I1227 10:10:24.133708 3738115 provision.go:177] copyRemoteCerts
I1227 10:10:24.133787 3738115 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1227 10:10:24.133831 3738115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-027208
I1227 10:10:24.151314 3738115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36225 SSHKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/force-systemd-flag-027208/id_rsa Username:docker}
I1227 10:10:24.250894 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I1227 10:10:24.250995 3738115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1227 10:10:24.269969 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/machines/server.pem -> /etc/docker/server.pem
I1227 10:10:24.270032 3738115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
I1227 10:10:24.289161 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I1227 10:10:24.289239 3738115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1227 10:10:24.306849 3738115 provision.go:87] duration metric: took 711.49982ms to configureAuth
I1227 10:10:24.306875 3738115 ubuntu.go:206] setting minikube options for container-runtime
I1227 10:10:24.307072 3738115 config.go:182] Loaded profile config "force-systemd-flag-027208": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1227 10:10:24.307083 3738115 machine.go:97] duration metric: took 4.213237619s to provisionDockerMachine
I1227 10:10:24.307090 3738115 client.go:176] duration metric: took 9.978658918s to LocalClient.Create
I1227 10:10:24.307107 3738115 start.go:167] duration metric: took 9.978722333s to libmachine.API.Create "force-systemd-flag-027208"
I1227 10:10:24.307114 3738115 start.go:293] postStartSetup for "force-systemd-flag-027208" (driver="docker")
I1227 10:10:24.307122 3738115 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1227 10:10:24.307178 3738115 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1227 10:10:24.307230 3738115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-027208
I1227 10:10:24.324192 3738115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36225 SSHKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/force-systemd-flag-027208/id_rsa Username:docker}
I1227 10:10:24.423140 3738115 ssh_runner.go:195] Run: cat /etc/os-release
I1227 10:10:24.426587 3738115 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1227 10:10:24.426659 3738115 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1227 10:10:24.426678 3738115 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-3531265/.minikube/addons for local assets ...
I1227 10:10:24.426739 3738115 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-3531265/.minikube/files for local assets ...
I1227 10:10:24.426819 3738115 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-3531265/.minikube/files/etc/ssl/certs/35331472.pem -> 35331472.pem in /etc/ssl/certs
I1227 10:10:24.426834 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/files/etc/ssl/certs/35331472.pem -> /etc/ssl/certs/35331472.pem
I1227 10:10:24.426951 3738115 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1227 10:10:24.434338 3738115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/files/etc/ssl/certs/35331472.pem --> /etc/ssl/certs/35331472.pem (1708 bytes)
I1227 10:10:24.452382 3738115 start.go:296] duration metric: took 145.254802ms for postStartSetup
I1227 10:10:24.452762 3738115 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-027208
I1227 10:10:24.469668 3738115 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/config.json ...
I1227 10:10:24.469957 3738115 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1227 10:10:24.470000 3738115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-027208
I1227 10:10:24.486890 3738115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36225 SSHKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/force-systemd-flag-027208/id_rsa Username:docker}
I1227 10:10:24.584309 3738115 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1227 10:10:24.589316 3738115 start.go:128] duration metric: took 10.264593752s to createHost
I1227 10:10:24.589389 3738115 start.go:83] releasing machines lock for "force-systemd-flag-027208", held for 10.264769864s
I1227 10:10:24.589479 3738115 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-027208
I1227 10:10:24.607151 3738115 ssh_runner.go:195] Run: cat /version.json
I1227 10:10:24.607216 3738115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-027208
I1227 10:10:24.607537 3738115 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1227 10:10:24.607594 3738115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-027208
I1227 10:10:24.647065 3738115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36225 SSHKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/force-systemd-flag-027208/id_rsa Username:docker}
I1227 10:10:24.656060 3738115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36225 SSHKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/force-systemd-flag-027208/id_rsa Username:docker}
I1227 10:10:24.852332 3738115 ssh_runner.go:195] Run: systemctl --version
I1227 10:10:24.859289 3738115 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1227 10:10:24.863820 3738115 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1227 10:10:24.863935 3738115 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1227 10:10:24.894008 3738115 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
I1227 10:10:24.894085 3738115 start.go:496] detecting cgroup driver to use...
I1227 10:10:24.894113 3738115 start.go:500] using "systemd" cgroup driver as enforced via flags
I1227 10:10:24.894199 3738115 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1227 10:10:24.909955 3738115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1227 10:10:24.924610 3738115 docker.go:218] disabling cri-docker service (if available) ...
I1227 10:10:24.924679 3738115 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1227 10:10:24.943027 3738115 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1227 10:10:24.962924 3738115 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1227 10:10:25.086519 3738115 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1227 10:10:25.217234 3738115 docker.go:234] disabling docker service ...
I1227 10:10:25.217301 3738115 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1227 10:10:25.239443 3738115 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1227 10:10:25.253469 3738115 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1227 10:10:25.372805 3738115 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1227 10:10:25.502827 3738115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1227 10:10:25.516102 3738115 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1227 10:10:25.530490 3738115 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1227 10:10:25.539633 3738115 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1227 10:10:25.548981 3738115 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
I1227 10:10:25.549107 3738115 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
I1227 10:10:25.558292 3738115 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1227 10:10:25.567719 3738115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1227 10:10:25.576955 3738115 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1227 10:10:25.586514 3738115 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1227 10:10:25.594864 3738115 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1227 10:10:25.604220 3738115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1227 10:10:25.613067 3738115 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1227 10:10:25.621797 3738115 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1227 10:10:25.629270 3738115 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1227 10:10:25.637053 3738115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 10:10:25.760495 3738115 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1227 10:10:25.897831 3738115 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
I1227 10:10:25.897957 3738115 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1227 10:10:25.901900 3738115 start.go:574] Will wait 60s for crictl version
I1227 10:10:25.902037 3738115 ssh_runner.go:195] Run: which crictl
I1227 10:10:25.905697 3738115 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1227 10:10:25.930207 3738115 start.go:590] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v2.2.1
RuntimeApiVersion: v1
I1227 10:10:25.930328 3738115 ssh_runner.go:195] Run: containerd --version
I1227 10:10:25.954007 3738115 ssh_runner.go:195] Run: containerd --version
I1227 10:10:25.981733 3738115 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
I1227 10:10:25.984781 3738115 cli_runner.go:164] Run: docker network inspect force-systemd-flag-027208 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1227 10:10:26.000934 3738115 ssh_runner.go:195] Run: grep 192.168.85.1 host.minikube.internal$ /etc/hosts
I1227 10:10:26.006285 3738115 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1227 10:10:26.018144 3738115 kubeadm.go:884] updating cluster {Name:force-systemd-flag-027208 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-027208 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
I1227 10:10:26.018261 3738115 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I1227 10:10:26.018337 3738115 ssh_runner.go:195] Run: sudo crictl images --output json
I1227 10:10:26.050904 3738115 containerd.go:635] all images are preloaded for containerd runtime.
I1227 10:10:26.050932 3738115 containerd.go:542] Images already preloaded, skipping extraction
I1227 10:10:26.051019 3738115 ssh_runner.go:195] Run: sudo crictl images --output json
I1227 10:10:26.077679 3738115 containerd.go:635] all images are preloaded for containerd runtime.
I1227 10:10:26.077700 3738115 cache_images.go:86] Images are preloaded, skipping loading
I1227 10:10:26.077708 3738115 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 containerd true true} ...
I1227 10:10:26.077812 3738115 kubeadm.go:947] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-027208 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
[Install]
config:
{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-027208 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1227 10:10:26.077878 3738115 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
I1227 10:10:26.103476 3738115 cni.go:84] Creating CNI manager for ""
I1227 10:10:26.103506 3738115 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1227 10:10:26.103527 3738115 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1227 10:10:26.103551 3738115 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-027208 NodeName:force-systemd-flag-027208 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1227 10:10:26.103669 3738115 kubeadm.go:203] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.85.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
name: "force-systemd-flag-027208"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.85.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.35.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I1227 10:10:26.103747 3738115 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
I1227 10:10:26.115900 3738115 binaries.go:51] Found k8s binaries, skipping transfer
I1227 10:10:26.115969 3738115 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1227 10:10:26.124889 3738115 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
I1227 10:10:26.139449 3738115 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1227 10:10:26.154050 3738115 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
I1227 10:10:26.169297 3738115 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I1227 10:10:26.173915 3738115 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1227 10:10:26.184920 3738115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 10:10:26.302987 3738115 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1227 10:10:26.319342 3738115 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208 for IP: 192.168.85.2
I1227 10:10:26.319367 3738115 certs.go:195] generating shared ca certs ...
I1227 10:10:26.319382 3738115 certs.go:227] acquiring lock for ca certs: {Name:mk8b517b50583c7fd9315f1419472c192d2e7a5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 10:10:26.319519 3738115 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.key
I1227 10:10:26.319566 3738115 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/proxy-client-ca.key
I1227 10:10:26.319577 3738115 certs.go:257] generating profile certs ...
I1227 10:10:26.319635 3738115 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/client.key
I1227 10:10:26.319659 3738115 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/client.crt with IP's: []
I1227 10:10:26.459451 3738115 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/client.crt ...
I1227 10:10:26.459481 3738115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/client.crt: {Name:mk84501b4c3d27859a09c7a6cf2970a871461396 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 10:10:26.459678 3738115 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/client.key ...
I1227 10:10:26.459696 3738115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/client.key: {Name:mk2ccf9cd6593ffe591c5f10566441231d2db314 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 10:10:26.459797 3738115 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/apiserver.key.e1bde68b
I1227 10:10:26.459816 3738115 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/apiserver.crt.e1bde68b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
I1227 10:10:26.619632 3738115 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/apiserver.crt.e1bde68b ...
I1227 10:10:26.619671 3738115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/apiserver.crt.e1bde68b: {Name:mk45edfe96d665c299603d64f2aab60b1ce255c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 10:10:26.619859 3738115 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/apiserver.key.e1bde68b ...
I1227 10:10:26.619874 3738115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/apiserver.key.e1bde68b: {Name:mkbd7ed3b29ae956b5f18bf81df861e3ebc9c0bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 10:10:26.619963 3738115 certs.go:382] copying /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/apiserver.crt.e1bde68b -> /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/apiserver.crt
I1227 10:10:26.620069 3738115 certs.go:386] copying /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/apiserver.key.e1bde68b -> /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/apiserver.key
I1227 10:10:26.620138 3738115 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/proxy-client.key
I1227 10:10:26.620158 3738115 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/proxy-client.crt with IP's: []
I1227 10:10:27.146672 3738115 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/proxy-client.crt ...
I1227 10:10:27.146707 3738115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/proxy-client.crt: {Name:mkb638601bcc294803da88d5fdf89e5d664c6575 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 10:10:27.146874 3738115 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/proxy-client.key ...
I1227 10:10:27.146889 3738115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/proxy-client.key: {Name:mk1275117485033a42422350e6b97f277389ec3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 10:10:27.146996 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I1227 10:10:27.147022 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I1227 10:10:27.147035 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I1227 10:10:27.147053 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I1227 10:10:27.147065 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I1227 10:10:27.147081 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I1227 10:10:27.147094 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I1227 10:10:27.147106 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
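With the profile certs generated and staged, one quick cross-check (a sketch; MINIKUBE_HOME stands for the .minikube directory in the paths above) is that the SANs baked into the apiserver cert match the IP list requested earlier:

    openssl x509 -noout -text \
      -in "$MINIKUBE_HOME/profiles/force-systemd-flag-027208/apiserver.crt" \
      | grep -A1 'Subject Alternative Name'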
I1227 10:10:27.147167 3738115 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/3533147.pem (1338 bytes)
W1227 10:10:27.147209 3738115 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/3533147_empty.pem, impossibly tiny 0 bytes
I1227 10:10:27.147220 3738115 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca-key.pem (1675 bytes)
I1227 10:10:27.147257 3738115 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca.pem (1082 bytes)
I1227 10:10:27.147286 3738115 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/cert.pem (1123 bytes)
I1227 10:10:27.147309 3738115 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/key.pem (1675 bytes)
I1227 10:10:27.147356 3738115 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/files/etc/ssl/certs/35331472.pem (1708 bytes)
I1227 10:10:27.147392 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I1227 10:10:27.147415 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/3533147.pem -> /usr/share/ca-certificates/3533147.pem
I1227 10:10:27.147433 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/files/etc/ssl/certs/35331472.pem -> /usr/share/ca-certificates/35331472.pem
I1227 10:10:27.147968 3738115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1227 10:10:27.172281 3738115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1227 10:10:27.199732 3738115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1227 10:10:27.218091 3738115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1227 10:10:27.236726 3738115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
I1227 10:10:27.255815 3738115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1227 10:10:27.273210 3738115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1227 10:10:27.291337 3738115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1227 10:10:27.309854 3738115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1227 10:10:27.327812 3738115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/3533147.pem --> /usr/share/ca-certificates/3533147.pem (1338 bytes)
I1227 10:10:27.345068 3738115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/files/etc/ssl/certs/35331472.pem --> /usr/share/ca-certificates/35331472.pem (1708 bytes)
I1227 10:10:27.363093 3738115 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
I1227 10:10:27.376243 3738115 ssh_runner.go:195] Run: openssl version
I1227 10:10:27.382542 3738115 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1227 10:10:27.390045 3738115 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1227 10:10:27.397770 3738115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1227 10:10:27.401584 3738115 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:25 /usr/share/ca-certificates/minikubeCA.pem
I1227 10:10:27.401753 3738115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1227 10:10:27.442821 3738115 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1227 10:10:27.450555 3738115 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
I1227 10:10:27.458392 3738115 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3533147.pem
I1227 10:10:27.465875 3738115 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3533147.pem /etc/ssl/certs/3533147.pem
I1227 10:10:27.473778 3738115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3533147.pem
I1227 10:10:27.477818 3738115 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:31 /usr/share/ca-certificates/3533147.pem
I1227 10:10:27.477901 3738115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3533147.pem
I1227 10:10:27.521479 3738115 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I1227 10:10:27.529246 3738115 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3533147.pem /etc/ssl/certs/51391683.0
I1227 10:10:27.537210 3738115 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/35331472.pem
I1227 10:10:27.545185 3738115 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/35331472.pem /etc/ssl/certs/35331472.pem
I1227 10:10:27.553062 3738115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/35331472.pem
I1227 10:10:27.557000 3738115 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:31 /usr/share/ca-certificates/35331472.pem
I1227 10:10:27.557069 3738115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/35331472.pem
I1227 10:10:27.598533 3738115 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I1227 10:10:27.606100 3738115 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/35331472.pem /etc/ssl/certs/3ec20f2e.0
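The test/ln sequences above implement OpenSSL's hashed-directory layout: each CA under /etc/ssl/certs must also be reachable as <subject-hash>.0, and the `openssl x509 -hash -noout` runs compute exactly those link names (b5213941, 51391683, 3ec20f2e). To re-derive one by hand:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # prints b5213941
    ls -l "/etc/ssl/certs/${h}.0"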
I1227 10:10:27.614561 3738115 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1227 10:10:27.619167 3738115 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1227 10:10:27.619261 3738115 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-027208 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-027208 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1227 10:10:27.619373 3738115 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1227 10:10:27.619454 3738115 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1227 10:10:27.664708 3738115 cri.go:96] found id: ""
I1227 10:10:27.664808 3738115 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1227 10:10:27.676293 3738115 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1227 10:10:27.684601 3738115 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1227 10:10:27.684711 3738115 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1227 10:10:27.693229 3738115 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1227 10:10:27.693252 3738115 kubeadm.go:158] found existing configuration files:
I1227 10:10:27.693326 3738115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1227 10:10:27.701375 3738115 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1227 10:10:27.701465 3738115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1227 10:10:27.709152 3738115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1227 10:10:27.717622 3738115 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1227 10:10:27.717691 3738115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1227 10:10:27.725649 3738115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1227 10:10:27.733875 3738115 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1227 10:10:27.733981 3738115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1227 10:10:27.741583 3738115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1227 10:10:27.749413 3738115 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1227 10:10:27.749491 3738115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1227 10:10:27.757332 3738115 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
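Everything below is the first kubeadm init attempt. To reproduce just the preflight stage in isolation (a sketch; the phase subcommand and flags are taken from the invocation above):

    sudo env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" \
      kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=SystemVerification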
I1227 10:10:27.795629 3738115 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
I1227 10:10:27.795779 3738115 kubeadm.go:319] [preflight] Running pre-flight checks
I1227 10:10:27.898250 3738115 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1227 10:10:27.898344 3738115 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1227 10:10:27.898392 3738115 kubeadm.go:319] OS: Linux
I1227 10:10:27.898440 3738115 kubeadm.go:319] CGROUPS_CPU: enabled
I1227 10:10:27.898492 3738115 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1227 10:10:27.898543 3738115 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1227 10:10:27.898594 3738115 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1227 10:10:27.898647 3738115 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1227 10:10:27.898703 3738115 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1227 10:10:27.898753 3738115 kubeadm.go:319] CGROUPS_PIDS: enabled
I1227 10:10:27.898801 3738115 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1227 10:10:27.898850 3738115 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1227 10:10:27.969995 3738115 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1227 10:10:27.970212 3738115 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1227 10:10:27.970343 3738115 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1227 10:10:27.975838 3738115 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1227 10:10:27.982447 3738115 out.go:252] - Generating certificates and keys ...
I1227 10:10:27.982636 3738115 kubeadm.go:319] [certs] Using existing ca certificate authority
I1227 10:10:27.982764 3738115 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1227 10:10:28.179272 3738115 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1227 10:10:28.301146 3738115 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1227 10:10:28.409704 3738115 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1227 10:10:28.575840 3738115 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1227 10:10:28.653265 3738115 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1227 10:10:28.653619 3738115 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-027208 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I1227 10:10:29.172495 3738115 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1227 10:10:29.173136 3738115 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-027208 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I1227 10:10:29.225627 3738115 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1227 10:10:29.920042 3738115 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1227 10:10:30.152507 3738115 kubeadm.go:319] [certs] Generating "sa" key and public key
I1227 10:10:30.153337 3738115 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1227 10:10:30.333897 3738115 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1227 10:10:30.680029 3738115 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1227 10:10:30.828481 3738115 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1227 10:10:30.943020 3738115 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1227 10:10:31.110010 3738115 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1227 10:10:31.110883 3738115 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1227 10:10:31.114899 3738115 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1227 10:10:31.121179 3738115 out.go:252] - Booting up control plane ...
I1227 10:10:31.121296 3738115 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1227 10:10:31.121382 3738115 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1227 10:10:31.121448 3738115 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1227 10:10:31.138571 3738115 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1227 10:10:31.139005 3738115 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1227 10:10:31.146921 3738115 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1227 10:10:31.147313 3738115 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1227 10:10:31.147361 3738115 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1227 10:10:31.282879 3738115 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1227 10:10:31.283057 3738115 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1227 10:14:31.283323 3738115 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000806423s
I1227 10:14:31.283368 3738115 kubeadm.go:319]
I1227 10:14:31.283433 3738115 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1227 10:14:31.283471 3738115 kubeadm.go:319] - The kubelet is not running
I1227 10:14:31.283588 3738115 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1227 10:14:31.283597 3738115 kubeadm.go:319]
I1227 10:14:31.283713 3738115 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1227 10:14:31.283748 3738115 kubeadm.go:319] - 'systemctl status kubelet'
I1227 10:14:31.283785 3738115 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1227 10:14:31.283793 3738115 kubeadm.go:319]
I1227 10:14:31.288767 3738115 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1227 10:14:31.289201 3738115 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1227 10:14:31.289312 3738115 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1227 10:14:31.289547 3738115 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1227 10:14:31.289553 3738115 kubeadm.go:319]
I1227 10:14:31.289621 3738115 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
W1227 10:14:31.289742 3738115 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-027208 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-027208 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000806423s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
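Before the retry below, the usual triage for a 10248/healthz timeout under --force-systemd is a cgroup-driver mismatch between the kubelet and containerd. A sketch of the first checks (the config path is the containerd default and may differ):

    systemctl status kubelet --no-pager
    journalctl -u kubelet -n 100 --no-pager | grep -iE 'cgroup|fail'
    grep -n 'SystemdCgroup' /etc/containerd/config.toml
    curl -sS http://127.0.0.1:10248/healthz; echo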
I1227 10:14:31.289819 3738115 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I1227 10:14:31.750300 3738115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1227 10:14:31.772656 3738115 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1227 10:14:31.772727 3738115 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1227 10:14:31.782758 3738115 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1227 10:14:31.782780 3738115 kubeadm.go:158] found existing configuration files:
I1227 10:14:31.782857 3738115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1227 10:14:31.796759 3738115 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1227 10:14:31.796822 3738115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1227 10:14:31.811896 3738115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1227 10:14:31.822736 3738115 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1227 10:14:31.822833 3738115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1227 10:14:31.836527 3738115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1227 10:14:31.851023 3738115 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1227 10:14:31.851095 3738115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1227 10:14:31.869098 3738115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1227 10:14:31.878981 3738115 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1227 10:14:31.879053 3738115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1227 10:14:31.887412 3738115 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1227 10:14:31.953162 3738115 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
I1227 10:14:31.953584 3738115 kubeadm.go:319] [preflight] Running pre-flight checks
I1227 10:14:32.062092 3738115 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1227 10:14:32.062172 3738115 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1227 10:14:32.062210 3738115 kubeadm.go:319] OS: Linux
I1227 10:14:32.062262 3738115 kubeadm.go:319] CGROUPS_CPU: enabled
I1227 10:14:32.062316 3738115 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1227 10:14:32.062368 3738115 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1227 10:14:32.062420 3738115 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1227 10:14:32.062472 3738115 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1227 10:14:32.062524 3738115 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1227 10:14:32.062577 3738115 kubeadm.go:319] CGROUPS_PIDS: enabled
I1227 10:14:32.062630 3738115 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1227 10:14:32.062681 3738115 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1227 10:14:32.194241 3738115 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1227 10:14:32.194360 3738115 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1227 10:14:32.194458 3738115 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1227 10:14:32.211795 3738115 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1227 10:14:32.217418 3738115 out.go:252] - Generating certificates and keys ...
I1227 10:14:32.217527 3738115 kubeadm.go:319] [certs] Using existing ca certificate authority
I1227 10:14:32.217602 3738115 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1227 10:14:32.217685 3738115 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1227 10:14:32.217751 3738115 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
I1227 10:14:32.217828 3738115 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
I1227 10:14:32.217893 3738115 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
I1227 10:14:32.217964 3738115 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
I1227 10:14:32.218035 3738115 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
I1227 10:14:32.218115 3738115 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1227 10:14:32.218192 3738115 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1227 10:14:32.218234 3738115 kubeadm.go:319] [certs] Using the existing "sa" key
I1227 10:14:32.218297 3738115 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1227 10:14:32.311471 3738115 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1227 10:14:32.630411 3738115 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1227 10:14:32.960523 3738115 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1227 10:14:33.272670 3738115 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1227 10:14:33.470189 3738115 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1227 10:14:33.471343 3738115 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1227 10:14:33.474434 3738115 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1227 10:14:33.476605 3738115 out.go:252] - Booting up control plane ...
I1227 10:14:33.476727 3738115 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1227 10:14:33.478244 3738115 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1227 10:14:33.479797 3738115 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1227 10:14:33.509188 3738115 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1227 10:14:33.510062 3738115 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1227 10:14:33.526391 3738115 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1227 10:14:33.526791 3738115 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1227 10:14:33.531788 3738115 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1227 10:14:33.783393 3738115 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1227 10:14:33.783515 3738115 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1227 10:18:33.783343 3738115 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000245188s
I1227 10:18:33.783607 3738115 kubeadm.go:319]
I1227 10:18:33.783674 3738115 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1227 10:18:33.783709 3738115 kubeadm.go:319] - The kubelet is not running
I1227 10:18:33.783814 3738115 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1227 10:18:33.783820 3738115 kubeadm.go:319]
I1227 10:18:33.783924 3738115 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1227 10:18:33.783956 3738115 kubeadm.go:319] - 'systemctl status kubelet'
I1227 10:18:33.783987 3738115 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1227 10:18:33.783992 3738115 kubeadm.go:319]
I1227 10:18:33.788224 3738115 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1227 10:18:33.788670 3738115 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1227 10:18:33.788796 3738115 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1227 10:18:33.789043 3738115 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1227 10:18:33.789054 3738115 kubeadm.go:319]
I1227 10:18:33.789122 3738115 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
I1227 10:18:33.789185 3738115 kubeadm.go:403] duration metric: took 8m6.169929895s to StartCluster
I1227 10:18:33.789236 3738115 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1227 10:18:33.789303 3738115 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
I1227 10:18:33.814197 3738115 cri.go:96] found id: ""
I1227 10:18:33.814236 3738115 logs.go:282] 0 containers: []
W1227 10:18:33.814245 3738115 logs.go:284] No container was found matching "kube-apiserver"
I1227 10:18:33.814252 3738115 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1227 10:18:33.814314 3738115 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
I1227 10:18:33.839019 3738115 cri.go:96] found id: ""
I1227 10:18:33.839043 3738115 logs.go:282] 0 containers: []
W1227 10:18:33.839051 3738115 logs.go:284] No container was found matching "etcd"
I1227 10:18:33.839058 3738115 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1227 10:18:33.839114 3738115 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
I1227 10:18:33.876385 3738115 cri.go:96] found id: ""
I1227 10:18:33.876414 3738115 logs.go:282] 0 containers: []
W1227 10:18:33.876427 3738115 logs.go:284] No container was found matching "coredns"
I1227 10:18:33.876433 3738115 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1227 10:18:33.876491 3738115 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
I1227 10:18:33.906761 3738115 cri.go:96] found id: ""
I1227 10:18:33.906788 3738115 logs.go:282] 0 containers: []
W1227 10:18:33.906797 3738115 logs.go:284] No container was found matching "kube-scheduler"
I1227 10:18:33.906803 3738115 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1227 10:18:33.906864 3738115 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
I1227 10:18:33.935959 3738115 cri.go:96] found id: ""
I1227 10:18:33.935985 3738115 logs.go:282] 0 containers: []
W1227 10:18:33.935994 3738115 logs.go:284] No container was found matching "kube-proxy"
I1227 10:18:33.936000 3738115 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1227 10:18:33.936056 3738115 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
I1227 10:18:33.960107 3738115 cri.go:96] found id: ""
I1227 10:18:33.960131 3738115 logs.go:282] 0 containers: []
W1227 10:18:33.960143 3738115 logs.go:284] No container was found matching "kube-controller-manager"
I1227 10:18:33.960149 3738115 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1227 10:18:33.960236 3738115 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
I1227 10:18:33.989273 3738115 cri.go:96] found id: ""
I1227 10:18:33.989300 3738115 logs.go:282] 0 containers: []
W1227 10:18:33.989310 3738115 logs.go:284] No container was found matching "kindnet"
I1227 10:18:33.989356 3738115 logs.go:123] Gathering logs for containerd ...
I1227 10:18:33.989378 3738115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1227 10:18:34.028316 3738115 logs.go:123] Gathering logs for container status ...
I1227 10:18:34.028366 3738115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1227 10:18:34.063676 3738115 logs.go:123] Gathering logs for kubelet ...
I1227 10:18:34.063759 3738115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1227 10:18:34.124368 3738115 logs.go:123] Gathering logs for dmesg ...
I1227 10:18:34.124411 3738115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1227 10:18:34.139149 3738115 logs.go:123] Gathering logs for describe nodes ...
I1227 10:18:34.139179 3738115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1227 10:18:34.206064 3738115 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1227 10:18:34.197603 4861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 10:18:34.198405 4861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 10:18:34.199906 4861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 10:18:34.200420 4861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 10:18:34.202145 4861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
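The connection-refused loop above confirms no apiserver ever came up, consistent with the empty crictl listings earlier. A last sanity check inside the node (assumes ss is present in the base image):

    sudo ss -ltnp | grep -E ':8443|:10248' || echo "nothing listening"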
W1227 10:18:34.206090 3738115 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000245188s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
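(A hedged expansion of the two commands kubeadm suggests above; since the kubelet runs inside the node container rather than on the host, they are wrapped in minikube ssh here. The profile name is taken from this log.)
    # unit status and the most recent kubelet journal entries, as kubeadm advises
    out/minikube-linux-arm64 -p force-systemd-flag-027208 ssh -- "sudo systemctl status kubelet --no-pager"
    out/minikube-linux-arm64 -p force-systemd-flag-027208 ssh -- "sudo journalctl -xeu kubelet --no-pager | tail -n 50"
    # the same healthz endpoint kubeadm polled for 4m0s before giving up
    out/minikube-linux-arm64 -p force-systemd-flag-027208 ssh -- "curl -sSL http://127.0.0.1:10248/healthz"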
W1227 10:18:34.206211 3738115 out.go:285] *
W1227 10:18:34.206276 3738115 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
W1227 10:18:34.206296 3738115 out.go:285] *
W1227 10:18:34.206569 3738115 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
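(The box above asks for a full log capture when filing an issue; a sketch of that step, using the binary path and profile from this run:)
    # capture complete logs to attach to a GitHub issue, as suggested above
    out/minikube-linux-arm64 -p force-systemd-flag-027208 logs --file=logs.txt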
I1227 10:18:34.211278 3738115 out.go:203]
W1227 10:18:34.214075 3738115 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
W1227 10:18:34.214127 3738115 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W1227 10:18:34.214153 3738115 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I1227 10:18:34.217184 3738115 out.go:203]
** /stderr **
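(One way to act on the suggestion logged above, sketched under the assumption that the profile can simply be recreated: delete it and retry with the kubelet cgroup driver pinned to systemd, matching the --force-systemd intent on this cgroupfs docker host. The kubeadm warning also names the kubelet option 'FailCgroupV1', which per that warning would additionally have to be 'false' on a cgroup v1 kernel like this one.)
    out/minikube-linux-arm64 delete -p force-systemd-flag-027208
    # retry with the cgroup driver the suggestion names
    out/minikube-linux-arm64 start -p force-systemd-flag-027208 --memory=3072 --force-systemd \
      --driver=docker --container-runtime=containerd \
      --extra-config=kubelet.cgroup-driver=systemd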
docker_test.go:93: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-flag-027208 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=containerd" : exit status 109
docker_test.go:121: (dbg) Run: out/minikube-linux-arm64 -p force-systemd-flag-027208 ssh "cat /etc/containerd/config.toml"
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2025-12-27 10:18:34.567922749 +0000 UTC m=+3202.359074628
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======> post-mortem[TestForceSystemdFlag]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======> post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:240: (dbg) Run: docker inspect force-systemd-flag-027208
helpers_test.go:244: (dbg) docker inspect force-systemd-flag-027208:
-- stdout --
[
{
"Id": "e0e73dc04b0aab4fcdaf361898cc0383e1c5547634bc16b5ba684ed19c69705b",
"Created": "2025-12-27T10:10:18.999708699Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 3738542,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-12-27T10:10:19.066236146Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
"ResolvConfPath": "/var/lib/docker/containers/e0e73dc04b0aab4fcdaf361898cc0383e1c5547634bc16b5ba684ed19c69705b/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/e0e73dc04b0aab4fcdaf361898cc0383e1c5547634bc16b5ba684ed19c69705b/hostname",
"HostsPath": "/var/lib/docker/containers/e0e73dc04b0aab4fcdaf361898cc0383e1c5547634bc16b5ba684ed19c69705b/hosts",
"LogPath": "/var/lib/docker/containers/e0e73dc04b0aab4fcdaf361898cc0383e1c5547634bc16b5ba684ed19c69705b/e0e73dc04b0aab4fcdaf361898cc0383e1c5547634bc16b5ba684ed19c69705b-json.log",
"Name": "/force-systemd-flag-027208",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"force-systemd-flag-027208:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "force-systemd-flag-027208",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 3221225472,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 6442450944,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "e0e73dc04b0aab4fcdaf361898cc0383e1c5547634bc16b5ba684ed19c69705b",
"LowerDir": "/var/lib/docker/overlay2/487251bacee39f8118a04e5796b9b85b7cd708351cfbf4db499ea57c5de16418-init/diff:/var/lib/docker/overlay2/2db3190b649abc62a8f6b3256c95cbe4767892923c34d4bdea0f0debaf7248d8/diff",
"MergedDir": "/var/lib/docker/overlay2/487251bacee39f8118a04e5796b9b85b7cd708351cfbf4db499ea57c5de16418/merged",
"UpperDir": "/var/lib/docker/overlay2/487251bacee39f8118a04e5796b9b85b7cd708351cfbf4db499ea57c5de16418/diff",
"WorkDir": "/var/lib/docker/overlay2/487251bacee39f8118a04e5796b9b85b7cd708351cfbf4db499ea57c5de16418/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "force-systemd-flag-027208",
"Source": "/var/lib/docker/volumes/force-systemd-flag-027208/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "force-systemd-flag-027208",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "force-systemd-flag-027208",
"name.minikube.sigs.k8s.io": "force-systemd-flag-027208",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "dd507a1d5818f6a99652133c3576c3adb743bed95d49e7435c3a3d4c86b89892",
"SandboxKey": "/var/run/docker/netns/dd507a1d5818",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "36225"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "36226"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "36229"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "36227"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "36228"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"force-systemd-flag-027208": {
"IPAMConfig": {
"IPv4Address": "192.168.85.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "32:10:2c:cc:6e:19",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "c217f0350b8b4e4e3b94001dd4b74a8853abe60a63cc91a348daffa0221690e1",
"EndpointID": "95337a0b7d2da67cb6b1113e7d25e2701a2473a726ceb586fd39a62636c2c6f1",
"Gateway": "192.168.85.1",
"IPAddress": "192.168.85.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"force-systemd-flag-027208",
"e0e73dc04b0a"
]
}
}
}
}
]
-- /stdout --
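(A narrower form of the same inspection, useful when only one field from the JSON above matters; standard docker CLI, nothing beyond this log's container name assumed:)
    # just the runtime state and init PID out of the inspect document above
    docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' force-systemd-flag-027208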
helpers_test.go:248: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-027208 -n force-systemd-flag-027208
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-027208 -n force-systemd-flag-027208: exit status 6 (327.260865ms)
-- stdout --
Running
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1227 10:18:34.897989 3767160 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-027208" does not appear in /home/jenkins/minikube-integration/22343-3531265/kubeconfig
** /stderr **
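(The status output above warns that kubectl points at a stale context and names the fix itself; as a sketch, scoped to this profile:)
    # rewrite the kubeconfig entry for this profile, as the warning suggests
    out/minikube-linux-arm64 -p force-systemd-flag-027208 update-context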
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestForceSystemdFlag FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======> post-mortem[TestForceSystemdFlag]: minikube logs <======
helpers_test.go:256: (dbg) Run: out/minikube-linux-arm64 -p force-systemd-flag-027208 logs -n 25
helpers_test.go:261: TestForceSystemdFlag logs:
-- stdout --
==> Audit <==
┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ delete │ -p cert-options-838902 │ cert-options-838902 │ jenkins │ v1.37.0 │ 27 Dec 25 10:12 UTC │ 27 Dec 25 10:12 UTC │
│ start │ -p old-k8s-version-429745 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-429745 │ jenkins │ v1.37.0 │ 27 Dec 25 10:12 UTC │ 27 Dec 25 10:13 UTC │
│ addons │ enable metrics-server -p old-k8s-version-429745 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ old-k8s-version-429745 │ jenkins │ v1.37.0 │ 27 Dec 25 10:14 UTC │ 27 Dec 25 10:14 UTC │
│ stop │ -p old-k8s-version-429745 --alsologtostderr -v=3 │ old-k8s-version-429745 │ jenkins │ v1.37.0 │ 27 Dec 25 10:14 UTC │ 27 Dec 25 10:14 UTC │
│ addons │ enable dashboard -p old-k8s-version-429745 --images=MetricsScraper=registry.k8s.io/echoserver:1.4 │ old-k8s-version-429745 │ jenkins │ v1.37.0 │ 27 Dec 25 10:14 UTC │ 27 Dec 25 10:14 UTC │
│ start │ -p old-k8s-version-429745 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-429745 │ jenkins │ v1.37.0 │ 27 Dec 25 10:14 UTC │ 27 Dec 25 10:14 UTC │
│ image │ old-k8s-version-429745 image list --format=json │ old-k8s-version-429745 │ jenkins │ v1.37.0 │ 27 Dec 25 10:15 UTC │ 27 Dec 25 10:15 UTC │
│ pause │ -p old-k8s-version-429745 --alsologtostderr -v=1 │ old-k8s-version-429745 │ jenkins │ v1.37.0 │ 27 Dec 25 10:15 UTC │ 27 Dec 25 10:15 UTC │
│ unpause │ -p old-k8s-version-429745 --alsologtostderr -v=1 │ old-k8s-version-429745 │ jenkins │ v1.37.0 │ 27 Dec 25 10:15 UTC │ 27 Dec 25 10:15 UTC │
│ delete │ -p old-k8s-version-429745 │ old-k8s-version-429745 │ jenkins │ v1.37.0 │ 27 Dec 25 10:15 UTC │ 27 Dec 25 10:15 UTC │
│ delete │ -p old-k8s-version-429745 │ old-k8s-version-429745 │ jenkins │ v1.37.0 │ 27 Dec 25 10:15 UTC │ 27 Dec 25 10:15 UTC │
│ start │ -p no-preload-878202 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0 │ no-preload-878202 │ jenkins │ v1.37.0 │ 27 Dec 25 10:15 UTC │ 27 Dec 25 10:15 UTC │
│ addons │ enable metrics-server -p no-preload-878202 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ no-preload-878202 │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │ 27 Dec 25 10:16 UTC │
│ stop │ -p no-preload-878202 --alsologtostderr -v=3 │ no-preload-878202 │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │ 27 Dec 25 10:16 UTC │
│ addons │ enable dashboard -p no-preload-878202 --images=MetricsScraper=registry.k8s.io/echoserver:1.4 │ no-preload-878202 │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │ 27 Dec 25 10:16 UTC │
│ start │ -p no-preload-878202 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0 │ no-preload-878202 │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │ 27 Dec 25 10:17 UTC │
│ image │ no-preload-878202 image list --format=json │ no-preload-878202 │ jenkins │ v1.37.0 │ 27 Dec 25 10:17 UTC │ 27 Dec 25 10:17 UTC │
│ pause │ -p no-preload-878202 --alsologtostderr -v=1 │ no-preload-878202 │ jenkins │ v1.37.0 │ 27 Dec 25 10:17 UTC │ 27 Dec 25 10:17 UTC │
│ unpause │ -p no-preload-878202 --alsologtostderr -v=1 │ no-preload-878202 │ jenkins │ v1.37.0 │ 27 Dec 25 10:17 UTC │ 27 Dec 25 10:17 UTC │
│ delete │ -p no-preload-878202 │ no-preload-878202 │ jenkins │ v1.37.0 │ 27 Dec 25 10:17 UTC │ 27 Dec 25 10:17 UTC │
│ delete │ -p no-preload-878202 │ no-preload-878202 │ jenkins │ v1.37.0 │ 27 Dec 25 10:17 UTC │ 27 Dec 25 10:17 UTC │
│ start │ -p embed-certs-161350 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0 │ embed-certs-161350 │ jenkins │ v1.37.0 │ 27 Dec 25 10:17 UTC │ 27 Dec 25 10:18 UTC │
│ addons │ enable metrics-server -p embed-certs-161350 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ embed-certs-161350 │ jenkins │ v1.37.0 │ 27 Dec 25 10:18 UTC │ 27 Dec 25 10:18 UTC │
│ stop │ -p embed-certs-161350 --alsologtostderr -v=3 │ embed-certs-161350 │ jenkins │ v1.37.0 │ 27 Dec 25 10:18 UTC │ │
│ ssh │ force-systemd-flag-027208 ssh cat /etc/containerd/config.toml │ force-systemd-flag-027208 │ jenkins │ v1.37.0 │ 27 Dec 25 10:18 UTC │ 27 Dec 25 10:18 UTC │
└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/27 10:17:30
Running on machine: ip-172-31-29-130
Binary: Built with gc go1.25.5 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1227 10:17:30.347749 3763056 out.go:360] Setting OutFile to fd 1 ...
I1227 10:17:30.347947 3763056 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 10:17:30.347979 3763056 out.go:374] Setting ErrFile to fd 2...
I1227 10:17:30.348002 3763056 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 10:17:30.348425 3763056 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-3531265/.minikube/bin
I1227 10:17:30.349032 3763056 out.go:368] Setting JSON to false
I1227 10:17:30.349964 3763056 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":57603,"bootTime":1766773048,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
I1227 10:17:30.350104 3763056 start.go:143] virtualization:
I1227 10:17:30.354253 3763056 out.go:179] * [embed-certs-161350] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I1227 10:17:30.358682 3763056 out.go:179] - MINIKUBE_LOCATION=22343
I1227 10:17:30.358815 3763056 notify.go:221] Checking for updates...
I1227 10:17:30.365184 3763056 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1227 10:17:30.368285 3763056 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22343-3531265/kubeconfig
I1227 10:17:30.371384 3763056 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-3531265/.minikube
I1227 10:17:30.374544 3763056 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I1227 10:17:30.377586 3763056 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1227 10:17:30.381113 3763056 config.go:182] Loaded profile config "force-systemd-flag-027208": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1227 10:17:30.381223 3763056 driver.go:422] Setting default libvirt URI to qemu:///system
I1227 10:17:30.413273 3763056 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
I1227 10:17:30.413402 3763056 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1227 10:17:30.473821 3763056 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:17:30.464408377 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1227 10:17:30.473925 3763056 docker.go:319] overlay module found
I1227 10:17:30.477152 3763056 out.go:179] * Using the docker driver based on user configuration
I1227 10:17:30.480028 3763056 start.go:309] selected driver: docker
I1227 10:17:30.480063 3763056 start.go:928] validating driver "docker" against <nil>
I1227 10:17:30.480078 3763056 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1227 10:17:30.480833 3763056 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1227 10:17:30.539494 3763056 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:17:30.530333266 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1227 10:17:30.539647 3763056 start_flags.go:333] no existing cluster config was found, will generate one from the flags
I1227 10:17:30.539874 3763056 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1227 10:17:30.543026 3763056 out.go:179] * Using Docker driver with root privileges
I1227 10:17:30.546000 3763056 cni.go:84] Creating CNI manager for ""
I1227 10:17:30.546080 3763056 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1227 10:17:30.546095 3763056 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
I1227 10:17:30.546164 3763056 start.go:353] cluster config:
{Name:embed-certs-161350 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-161350 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1227 10:17:30.549368 3763056 out.go:179] * Starting "embed-certs-161350" primary control-plane node in "embed-certs-161350" cluster
I1227 10:17:30.552328 3763056 cache.go:134] Beginning downloading kic base image for docker with containerd
I1227 10:17:30.555306 3763056 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
I1227 10:17:30.558166 3763056 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I1227 10:17:30.558219 3763056 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-3531265/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
I1227 10:17:30.558234 3763056 cache.go:65] Caching tarball of preloaded images
I1227 10:17:30.558239 3763056 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
I1227 10:17:30.558317 3763056 preload.go:251] Found /home/jenkins/minikube-integration/22343-3531265/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I1227 10:17:30.558327 3763056 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
I1227 10:17:30.558444 3763056 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/config.json ...
I1227 10:17:30.558461 3763056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/config.json: {Name:mkeb2d24ed7cd78ac4b9966b3f4e0b1888680eb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 10:17:30.580454 3763056 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
I1227 10:17:30.580480 3763056 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
I1227 10:17:30.580496 3763056 cache.go:243] Successfully downloaded all kic artifacts
I1227 10:17:30.580529 3763056 start.go:360] acquireMachinesLock for embed-certs-161350: {Name:mk5eca3f0e9c960c00971a61d3c4e9d0151a24a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1227 10:17:30.580648 3763056 start.go:364] duration metric: took 99.739µs to acquireMachinesLock for "embed-certs-161350"
I1227 10:17:30.580680 3763056 start.go:93] Provisioning new machine with config: &{Name:embed-certs-161350 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-161350 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1227 10:17:30.580759 3763056 start.go:125] createHost starting for "" (driver="docker")
I1227 10:17:30.584200 3763056 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
I1227 10:17:30.584464 3763056 start.go:159] libmachine.API.Create for "embed-certs-161350" (driver="docker")
I1227 10:17:30.584504 3763056 client.go:173] LocalClient.Create starting
I1227 10:17:30.584581 3763056 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca.pem
I1227 10:17:30.584626 3763056 main.go:144] libmachine: Decoding PEM data...
I1227 10:17:30.584644 3763056 main.go:144] libmachine: Parsing certificate...
I1227 10:17:30.584700 3763056 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/cert.pem
I1227 10:17:30.584721 3763056 main.go:144] libmachine: Decoding PEM data...
I1227 10:17:30.584732 3763056 main.go:144] libmachine: Parsing certificate...
I1227 10:17:30.585149 3763056 cli_runner.go:164] Run: docker network inspect embed-certs-161350 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1227 10:17:30.601724 3763056 cli_runner.go:211] docker network inspect embed-certs-161350 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1227 10:17:30.601827 3763056 network_create.go:284] running [docker network inspect embed-certs-161350] to gather additional debugging logs...
I1227 10:17:30.601849 3763056 cli_runner.go:164] Run: docker network inspect embed-certs-161350
W1227 10:17:30.620675 3763056 cli_runner.go:211] docker network inspect embed-certs-161350 returned with exit code 1
I1227 10:17:30.620711 3763056 network_create.go:287] error running [docker network inspect embed-certs-161350]: docker network inspect embed-certs-161350: exit status 1
stdout:
[]
stderr:
Error response from daemon: network embed-certs-161350 not found
I1227 10:17:30.620725 3763056 network_create.go:289] output of [docker network inspect embed-certs-161350]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network embed-certs-161350 not found
** /stderr **
I1227 10:17:30.620831 3763056 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1227 10:17:30.639239 3763056 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d8712ba8a9f7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:9e:f2:5a:61:6a:4e} reservation:<nil>}
I1227 10:17:30.639604 3763056 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-43ae11d059eb IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6e:6d:b0:96:78:2a} reservation:<nil>}
I1227 10:17:30.639941 3763056 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8c4bd1426b4b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:7a:5d:63:1e:36:ed} reservation:<nil>}
I1227 10:17:30.640396 3763056 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019a0d30}
I1227 10:17:30.640420 3763056 network_create.go:124] attempt to create docker network embed-certs-161350 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
I1227 10:17:30.640476 3763056 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-161350 embed-certs-161350
I1227 10:17:30.708947 3763056 network_create.go:108] docker network embed-certs-161350 192.168.76.0/24 created
I1227 10:17:30.708975 3763056 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-161350" container
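The three "skipping subnet" lines and the pick of 192.168.76.0/24 show the free-subnet scan: candidate private /24 blocks are probed in order, the first unused one wins, the gateway sits at .1, and the single node is pinned to .2 (the "calculated static IP" line). A minimal Go sketch of that selection, assuming the step-of-9 progression visible in this run; the taken map stands in for inspecting existing bridge interfaces, and this is an illustration, not minikube's actual network.go:

package main

import "fmt"

// pickFreeSubnet walks candidate 192.168.x.0/24 blocks and returns the first
// free one, plus the .1 gateway and the .2 node IP derived from it.
func pickFreeSubnet(taken map[string]bool) (subnet, gateway, nodeIP string) {
	for octet := 49; octet <= 255; octet += 9 { // 49, 58, 67, 76, ... as in this log
		s := fmt.Sprintf("192.168.%d.0/24", octet)
		if taken[s] {
			continue
		}
		return s, fmt.Sprintf("192.168.%d.1", octet), fmt.Sprintf("192.168.%d.2", octet)
	}
	return "", "", ""
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true, "192.168.67.0/24": true,
	}
	fmt.Println(pickFreeSubnet(taken)) // 192.168.76.0/24 192.168.76.1 192.168.76.2
}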
I1227 10:17:30.709050 3763056 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1227 10:17:30.725619 3763056 cli_runner.go:164] Run: docker volume create embed-certs-161350 --label name.minikube.sigs.k8s.io=embed-certs-161350 --label created_by.minikube.sigs.k8s.io=true
I1227 10:17:30.744664 3763056 oci.go:103] Successfully created a docker volume embed-certs-161350
I1227 10:17:30.744768 3763056 cli_runner.go:164] Run: docker run --rm --name embed-certs-161350-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-161350 --entrypoint /usr/bin/test -v embed-certs-161350:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
I1227 10:17:31.298923 3763056 oci.go:107] Successfully prepared a docker volume embed-certs-161350
I1227 10:17:31.299017 3763056 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I1227 10:17:31.299028 3763056 kic.go:194] Starting extracting preloaded images to volume ...
I1227 10:17:31.299102 3763056 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-3531265/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-161350:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
I1227 10:17:35.182598 3763056 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-3531265/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-161350:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (3.883439274s)
I1227 10:17:35.182634 3763056 kic.go:203] duration metric: took 3.883603076s to extract preloaded images to volume ...
W1227 10:17:35.182761 3763056 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1227 10:17:35.182889 3763056 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1227 10:17:35.237496 3763056 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-161350 --name embed-certs-161350 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-161350 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-161350 --network embed-certs-161350 --ip 192.168.76.2 --volume embed-certs-161350:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
I1227 10:17:35.545813 3763056 cli_runner.go:164] Run: docker container inspect embed-certs-161350 --format={{.State.Running}}
I1227 10:17:35.569873 3763056 cli_runner.go:164] Run: docker container inspect embed-certs-161350 --format={{.State.Status}}
I1227 10:17:35.593009 3763056 cli_runner.go:164] Run: docker exec embed-certs-161350 stat /var/lib/dpkg/alternatives/iptables
I1227 10:17:35.646142 3763056 oci.go:144] the created container "embed-certs-161350" has a running status.
I1227 10:17:35.646169 3763056 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22343-3531265/.minikube/machines/embed-certs-161350/id_rsa...
I1227 10:17:35.832430 3763056 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22343-3531265/.minikube/machines/embed-certs-161350/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1227 10:17:35.861361 3763056 cli_runner.go:164] Run: docker container inspect embed-certs-161350 --format={{.State.Status}}
I1227 10:17:35.890340 3763056 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1227 10:17:35.890375 3763056 kic_runner.go:114] Args: [docker exec --privileged embed-certs-161350 chown docker:docker /home/docker/.ssh/authorized_keys]
I1227 10:17:35.952579 3763056 cli_runner.go:164] Run: docker container inspect embed-certs-161350 --format={{.State.Status}}
I1227 10:17:35.977800 3763056 machine.go:94] provisionDockerMachine start ...
I1227 10:17:35.977899 3763056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-161350
I1227 10:17:36.006612 3763056 main.go:144] libmachine: Using SSH client type: native
I1227 10:17:36.007015 3763056 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 36255 <nil> <nil>}
I1227 10:17:36.007028 3763056 main.go:144] libmachine: About to run SSH command:
hostname
I1227 10:17:36.007909 3763056 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I1227 10:17:39.150696 3763056 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-161350
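The SSH provisioning above is two steps: resolve which random host port Docker published for the container's 22/tcp (36255 in this run), then keep dialing until sshd answers; the first attempt here failed with "handshake failed: EOF" and succeeded about three seconds later. A sketch of that flow; the inspect template mirrors the log, but the retry cadence is invented for illustration:

package main

import (
	"fmt"
	"net"
	"os/exec"
	"strings"
	"time"
)

// hostSSHPort asks Docker which host port was published for 22/tcp.
func hostSSHPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

// waitForSSH dials until the forwarded port accepts a TCP connection.
func waitForSSH(port string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if c, err := net.DialTimeout("tcp", "127.0.0.1:"+port, 2*time.Second); err == nil {
			c.Close()
			return nil // port is open; a real client would now do the SSH handshake
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("ssh on port %s not ready after %s", port, timeout)
}

func main() {
	port, err := hostSSHPort("embed-certs-161350")
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh port:", port, "err:", waitForSSH(port, 30*time.Second))
}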
I1227 10:17:39.150719 3763056 ubuntu.go:182] provisioning hostname "embed-certs-161350"
I1227 10:17:39.150784 3763056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-161350
I1227 10:17:39.169295 3763056 main.go:144] libmachine: Using SSH client type: native
I1227 10:17:39.169621 3763056 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 36255 <nil> <nil>}
I1227 10:17:39.169637 3763056 main.go:144] libmachine: About to run SSH command:
sudo hostname embed-certs-161350 && echo "embed-certs-161350" | sudo tee /etc/hostname
I1227 10:17:39.316441 3763056 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-161350
I1227 10:17:39.316528 3763056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-161350
I1227 10:17:39.334187 3763056 main.go:144] libmachine: Using SSH client type: native
I1227 10:17:39.334505 3763056 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 36255 <nil> <nil>}
I1227 10:17:39.334529 3763056 main.go:144] libmachine: About to run SSH command:
if ! grep -xq '.*\sembed-certs-161350' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-161350/g' /etc/hosts;
else
echo '127.0.1.1 embed-certs-161350' | sudo tee -a /etc/hosts;
fi
fi
I1227 10:17:39.475324 3763056 main.go:144] libmachine: SSH cmd err, output: <nil>:
I1227 10:17:39.475348 3763056 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-3531265/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-3531265/.minikube}
I1227 10:17:39.475367 3763056 ubuntu.go:190] setting up certificates
I1227 10:17:39.475377 3763056 provision.go:84] configureAuth start
I1227 10:17:39.475438 3763056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-161350
I1227 10:17:39.492280 3763056 provision.go:143] copyHostCerts
I1227 10:17:39.492372 3763056 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.pem, removing ...
I1227 10:17:39.492388 3763056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.pem
I1227 10:17:39.492471 3763056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.pem (1082 bytes)
I1227 10:17:39.492566 3763056 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-3531265/.minikube/cert.pem, removing ...
I1227 10:17:39.492575 3763056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-3531265/.minikube/cert.pem
I1227 10:17:39.492602 3763056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-3531265/.minikube/cert.pem (1123 bytes)
I1227 10:17:39.492659 3763056 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-3531265/.minikube/key.pem, removing ...
I1227 10:17:39.492669 3763056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-3531265/.minikube/key.pem
I1227 10:17:39.492692 3763056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-3531265/.minikube/key.pem (1675 bytes)
I1227 10:17:39.492751 3763056 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca-key.pem org=jenkins.embed-certs-161350 san=[127.0.0.1 192.168.76.2 embed-certs-161350 localhost minikube]
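The server cert is generated with the SAN set listed above (127.0.0.1, 192.168.76.2, embed-certs-161350, localhost, minikube) and the 26280h lifetime from the config dump. A compact Go sketch producing a certificate with the same SANs; it is self-signed for brevity, whereas the real flow signs with the minikube CA (ca.pem/ca-key.pem):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-161350"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config
		DNSNames:     []string{"embed-certs-161350", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}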
I1227 10:17:39.611352 3763056 provision.go:177] copyRemoteCerts
I1227 10:17:39.611420 3763056 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1227 10:17:39.611463 3763056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-161350
I1227 10:17:39.629949 3763056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36255 SSHKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/embed-certs-161350/id_rsa Username:docker}
I1227 10:17:39.735261 3763056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1227 10:17:39.754155 3763056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I1227 10:17:39.773374 3763056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1227 10:17:39.792319 3763056 provision.go:87] duration metric: took 316.908283ms to configureAuth
I1227 10:17:39.792403 3763056 ubuntu.go:206] setting minikube options for container-runtime
I1227 10:17:39.792651 3763056 config.go:182] Loaded profile config "embed-certs-161350": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1227 10:17:39.792665 3763056 machine.go:97] duration metric: took 3.814848167s to provisionDockerMachine
I1227 10:17:39.792679 3763056 client.go:176] duration metric: took 9.20815998s to LocalClient.Create
I1227 10:17:39.792702 3763056 start.go:167] duration metric: took 9.208240034s to libmachine.API.Create "embed-certs-161350"
I1227 10:17:39.792710 3763056 start.go:293] postStartSetup for "embed-certs-161350" (driver="docker")
I1227 10:17:39.792724 3763056 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1227 10:17:39.792777 3763056 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1227 10:17:39.792828 3763056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-161350
I1227 10:17:39.810832 3763056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36255 SSHKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/embed-certs-161350/id_rsa Username:docker}
I1227 10:17:39.914856 3763056 ssh_runner.go:195] Run: cat /etc/os-release
I1227 10:17:39.918159 3763056 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1227 10:17:39.918190 3763056 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1227 10:17:39.918218 3763056 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-3531265/.minikube/addons for local assets ...
I1227 10:17:39.918282 3763056 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-3531265/.minikube/files for local assets ...
I1227 10:17:39.918405 3763056 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-3531265/.minikube/files/etc/ssl/certs/35331472.pem -> 35331472.pem in /etc/ssl/certs
I1227 10:17:39.918524 3763056 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1227 10:17:39.926178 3763056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/files/etc/ssl/certs/35331472.pem --> /etc/ssl/certs/35331472.pem (1708 bytes)
I1227 10:17:39.943926 3763056 start.go:296] duration metric: took 151.196581ms for postStartSetup
I1227 10:17:39.944325 3763056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-161350
I1227 10:17:39.969361 3763056 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/config.json ...
I1227 10:17:39.969645 3763056 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1227 10:17:39.969698 3763056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-161350
I1227 10:17:39.987648 3763056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36255 SSHKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/embed-certs-161350/id_rsa Username:docker}
I1227 10:17:40.096457 3763056 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1227 10:17:40.101581 3763056 start.go:128] duration metric: took 9.520806883s to createHost
I1227 10:17:40.101606 3763056 start.go:83] releasing machines lock for "embed-certs-161350", held for 9.520943511s
I1227 10:17:40.101695 3763056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-161350
I1227 10:17:40.120068 3763056 ssh_runner.go:195] Run: cat /version.json
I1227 10:17:40.120130 3763056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-161350
I1227 10:17:40.120429 3763056 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1227 10:17:40.120492 3763056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-161350
I1227 10:17:40.142048 3763056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36255 SSHKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/embed-certs-161350/id_rsa Username:docker}
I1227 10:17:40.151472 3763056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36255 SSHKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/embed-certs-161350/id_rsa Username:docker}
I1227 10:17:40.242627 3763056 ssh_runner.go:195] Run: systemctl --version
I1227 10:17:40.331198 3763056 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1227 10:17:40.335662 3763056 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1227 10:17:40.335758 3763056 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1227 10:17:40.365290 3763056 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
I1227 10:17:40.365323 3763056 start.go:496] detecting cgroup driver to use...
I1227 10:17:40.365358 3763056 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1227 10:17:40.365423 3763056 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1227 10:17:40.387292 3763056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1227 10:17:40.402255 3763056 docker.go:218] disabling cri-docker service (if available) ...
I1227 10:17:40.402321 3763056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1227 10:17:40.420915 3763056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1227 10:17:40.440045 3763056 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1227 10:17:40.557834 3763056 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1227 10:17:40.683121 3763056 docker.go:234] disabling docker service ...
I1227 10:17:40.683248 3763056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1227 10:17:40.706381 3763056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1227 10:17:40.720733 3763056 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1227 10:17:40.843264 3763056 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1227 10:17:40.962442 3763056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1227 10:17:40.975424 3763056 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1227 10:17:40.989978 3763056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1227 10:17:40.999123 3763056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1227 10:17:41.009942 3763056 containerd.go:147] configuring containerd to use "cgroupfs" as cgroup driver...
I1227 10:17:41.010013 3763056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1227 10:17:41.019526 3763056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1227 10:17:41.028794 3763056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1227 10:17:41.037543 3763056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1227 10:17:41.046669 3763056 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1227 10:17:41.055500 3763056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1227 10:17:41.065018 3763056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1227 10:17:41.074377 3763056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
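The run of sed edits above rewrites /etc/containerd/config.toml in place; the decisive one flips SystemdCgroup to false so containerd matches the "cgroupfs" driver detected on the host. A Go sketch of that single rewrite, as a stand-in for the sed invocation applied to an in-memory string:

package main

import (
	"fmt"
	"regexp"
)

// setSystemdCgroup rewrites every "SystemdCgroup = ..." line, preserving
// its indentation, like the sed expression in the log.
func setSystemdCgroup(config string, enabled bool) string {
	re := regexp.MustCompile(`(?m)^([ \t]*)SystemdCgroup = .*$`)
	return re.ReplaceAllString(config, fmt.Sprintf("${1}SystemdCgroup = %v", enabled))
}

func main() {
	in := "[plugins]\n  SystemdCgroup = true\n"
	fmt.Print(setSystemdCgroup(in, false)) // ->   SystemdCgroup = false
}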
I1227 10:17:41.083589 3763056 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1227 10:17:41.091802 3763056 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1227 10:17:41.099594 3763056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 10:17:41.238093 3763056 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1227 10:17:41.376092 3763056 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
I1227 10:17:41.376167 3763056 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1227 10:17:41.380404 3763056 start.go:574] Will wait 60s for crictl version
I1227 10:17:41.380470 3763056 ssh_runner.go:195] Run: which crictl
I1227 10:17:41.384211 3763056 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1227 10:17:41.408620 3763056 start.go:590] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v2.2.1
RuntimeApiVersion: v1
I1227 10:17:41.408689 3763056 ssh_runner.go:195] Run: containerd --version
I1227 10:17:41.428129 3763056 ssh_runner.go:195] Run: containerd --version
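"Will wait 60s for socket path" and the crictl probe above form a readiness gate: after restarting containerd, poll until /run/containerd/containerd.sock exists, then confirm the runtime answers over CRI. A sketch of that gate, assuming it runs inside the node (in the log it goes through ssh_runner) and using a 500ms poll interval of my choosing:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForSocket polls until the containerd socket appears or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("socket %s not ready after %s", path, timeout)
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		panic(err)
	}
	out, err := exec.Command("sudo", "/usr/local/bin/crictl", "version").CombinedOutput()
	fmt.Println(string(out), err)
}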
I1227 10:17:41.452537 3763056 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
I1227 10:17:41.455600 3763056 cli_runner.go:164] Run: docker network inspect embed-certs-161350 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1227 10:17:41.472023 3763056 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I1227 10:17:41.475960 3763056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
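The /etc/hosts one-liner above is an idempotent upsert: strip any line already ending in the name, append a fresh "IP name" mapping, and copy the result back with sudo. The same idea in Go, as an illustration only (the real step stays a shell pipeline over ssh):

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry drops stale mappings for name and appends a fresh one.
func upsertHostsEntry(contents, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(contents, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) || strings.HasSuffix(line, " "+name) {
			continue // remove any previous mapping for this name
		}
		kept = append(kept, line)
	}
	return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, name)
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.76.1\thost.minikube.internal\n"
	fmt.Print(upsertHostsEntry(hosts, "192.168.76.1", "host.minikube.internal"))
}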
I1227 10:17:41.486181 3763056 kubeadm.go:884] updating cluster {Name:embed-certs-161350 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-161350 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
I1227 10:17:41.486314 3763056 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I1227 10:17:41.486383 3763056 ssh_runner.go:195] Run: sudo crictl images --output json
I1227 10:17:41.511121 3763056 containerd.go:635] all images are preloaded for containerd runtime.
I1227 10:17:41.511147 3763056 containerd.go:542] Images already preloaded, skipping extraction
I1227 10:17:41.511211 3763056 ssh_runner.go:195] Run: sudo crictl images --output json
I1227 10:17:41.539145 3763056 containerd.go:635] all images are preloaded for containerd runtime.
I1227 10:17:41.539169 3763056 cache_images.go:86] Images are preloaded, skipping loading
I1227 10:17:41.539177 3763056 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 containerd true true} ...
I1227 10:17:41.539266 3763056 kubeadm.go:947] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-161350 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
[Install]
config:
{KubernetesVersion:v1.35.0 ClusterName:embed-certs-161350 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1227 10:17:41.539337 3763056 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
I1227 10:17:41.565095 3763056 cni.go:84] Creating CNI manager for ""
I1227 10:17:41.565120 3763056 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1227 10:17:41.565138 3763056 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1227 10:17:41.565161 3763056 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-161350 NodeName:embed-certs-161350 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1227 10:17:41.565283 3763056 kubeadm.go:203] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.76.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
name: "embed-certs-161350"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.76.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.35.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
failCgroupV1: false
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I1227 10:17:41.565371 3763056 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
I1227 10:17:41.573458 3763056 binaries.go:51] Found k8s binaries, skipping transfer
I1227 10:17:41.573530 3763056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1227 10:17:41.581416 3763056 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
I1227 10:17:41.594392 3763056 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1227 10:17:41.607827 3763056 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2251 bytes)
I1227 10:17:41.620708 3763056 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I1227 10:17:41.624397 3763056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1227 10:17:41.634154 3763056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 10:17:41.745796 3763056 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1227 10:17:41.763235 3763056 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350 for IP: 192.168.76.2
I1227 10:17:41.763259 3763056 certs.go:195] generating shared ca certs ...
I1227 10:17:41.763274 3763056 certs.go:227] acquiring lock for ca certs: {Name:mk8b517b50583c7fd9315f1419472c192d2e7a5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 10:17:41.763428 3763056 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.key
I1227 10:17:41.763493 3763056 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/proxy-client-ca.key
I1227 10:17:41.763506 3763056 certs.go:257] generating profile certs ...
I1227 10:17:41.763579 3763056 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/client.key
I1227 10:17:41.763595 3763056 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/client.crt with IP's: []
I1227 10:17:42.280574 3763056 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/client.crt ...
I1227 10:17:42.280617 3763056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/client.crt: {Name:mk8667852cc806cc3165c03c25c3a212a68f8de1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 10:17:42.280860 3763056 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/client.key ...
I1227 10:17:42.280877 3763056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/client.key: {Name:mkb10810d7a2b7f61b39f4261e8426c92f955a06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 10:17:42.280987 3763056 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/apiserver.key.45c8ab8d
I1227 10:17:42.281010 3763056 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/apiserver.crt.45c8ab8d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
I1227 10:17:42.434583 3763056 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/apiserver.crt.45c8ab8d ...
I1227 10:17:42.434616 3763056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/apiserver.crt.45c8ab8d: {Name:mk25aa3cc165e5dd0e3336aee06656ae79b623b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 10:17:42.434802 3763056 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/apiserver.key.45c8ab8d ...
I1227 10:17:42.434819 3763056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/apiserver.key.45c8ab8d: {Name:mk19869ca165d1e9be82068dd967222d69549cc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 10:17:42.434918 3763056 certs.go:382] copying /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/apiserver.crt.45c8ab8d -> /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/apiserver.crt
I1227 10:17:42.435015 3763056 certs.go:386] copying /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/apiserver.key.45c8ab8d -> /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/apiserver.key
I1227 10:17:42.435076 3763056 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/proxy-client.key
I1227 10:17:42.435095 3763056 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/proxy-client.crt with IP's: []
I1227 10:17:43.180975 3763056 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/proxy-client.crt ...
I1227 10:17:43.181014 3763056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/proxy-client.crt: {Name:mk6c432d3dba85ffdb00efb19ccf25436337b3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 10:17:43.181220 3763056 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/proxy-client.key ...
I1227 10:17:43.181238 3763056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/proxy-client.key: {Name:mk02f27db2357d7ab70a1eb701b073ee8b3df705 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
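Each "WriteFile acquiring ... {Delay:500ms Timeout:1m0s}" line above takes a named lock before touching a cert file, retrying every 500ms for up to a minute. A sketch of that acquire loop using an O_EXCL lock file as a stand-in primitive (the real lock type is minikube's own and is not shown here):

package main

import (
	"fmt"
	"os"
	"time"
)

// acquireLock retries lock-file creation every delay until timeout.
func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("lock %s: timed out after %s", path, timeout)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquireLock("/tmp/client.crt.lock", 500*time.Millisecond, time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	fmt.Println("lock held; safe to write the cert")
}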
I1227 10:17:43.181445 3763056 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/3533147.pem (1338 bytes)
W1227 10:17:43.181495 3763056 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/3533147_empty.pem, impossibly tiny 0 bytes
I1227 10:17:43.181508 3763056 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca-key.pem (1675 bytes)
I1227 10:17:43.181535 3763056 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca.pem (1082 bytes)
I1227 10:17:43.181565 3763056 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/cert.pem (1123 bytes)
I1227 10:17:43.181594 3763056 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/key.pem (1675 bytes)
I1227 10:17:43.181642 3763056 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/files/etc/ssl/certs/35331472.pem (1708 bytes)
I1227 10:17:43.182266 3763056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1227 10:17:43.202786 3763056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1227 10:17:43.222708 3763056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1227 10:17:43.241462 3763056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1227 10:17:43.260144 3763056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
I1227 10:17:43.278243 3763056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1227 10:17:43.296663 3763056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1227 10:17:43.314521 3763056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1227 10:17:43.332147 3763056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/3533147.pem --> /usr/share/ca-certificates/3533147.pem (1338 bytes)
I1227 10:17:43.349931 3763056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/files/etc/ssl/certs/35331472.pem --> /usr/share/ca-certificates/35331472.pem (1708 bytes)
I1227 10:17:43.371343 3763056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1227 10:17:43.390696 3763056 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
I1227 10:17:43.405553 3763056 ssh_runner.go:195] Run: openssl version
I1227 10:17:43.412623 3763056 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/35331472.pem
I1227 10:17:43.420280 3763056 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/35331472.pem /etc/ssl/certs/35331472.pem
I1227 10:17:43.427885 3763056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/35331472.pem
I1227 10:17:43.431482 3763056 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:31 /usr/share/ca-certificates/35331472.pem
I1227 10:17:43.431544 3763056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/35331472.pem
I1227 10:17:43.474604 3763056 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I1227 10:17:43.482052 3763056 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/35331472.pem /etc/ssl/certs/3ec20f2e.0
I1227 10:17:43.489207 3763056 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1227 10:17:43.496409 3763056 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1227 10:17:43.503991 3763056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1227 10:17:43.508047 3763056 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:25 /usr/share/ca-certificates/minikubeCA.pem
I1227 10:17:43.508111 3763056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1227 10:17:43.551068 3763056 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1227 10:17:43.558603 3763056 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
I1227 10:17:43.566114 3763056 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3533147.pem
I1227 10:17:43.574370 3763056 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3533147.pem /etc/ssl/certs/3533147.pem
I1227 10:17:43.582359 3763056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3533147.pem
I1227 10:17:43.586190 3763056 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:31 /usr/share/ca-certificates/3533147.pem
I1227 10:17:43.586266 3763056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3533147.pem
I1227 10:17:43.627776 3763056 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I1227 10:17:43.635555 3763056 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3533147.pem /etc/ssl/certs/51391683.0
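The openssl/ln pairs above follow OpenSSL's hashed-directory convention: compute each PEM's subject hash and symlink /etc/ssl/certs/<hash>.0 to it so TLS stacks can look the CA up by hash. One iteration of that loop as a Go sketch:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCA hashes a PEM with openssl and links <hash>.0 to it, like ln -fs.
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	os.Remove(link) // -f semantics: replace an existing link
	return os.Symlink(pemPath, link)
}

func main() {
	fmt.Println(installCA("/usr/share/ca-certificates/minikubeCA.pem"))
}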
I1227 10:17:43.643226 3763056 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1227 10:17:43.647640 3763056 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1227 10:17:43.647743 3763056 kubeadm.go:401] StartCluster: {Name:embed-certs-161350 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-161350 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1227 10:17:43.647893 3763056 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1227 10:17:43.647987 3763056 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1227 10:17:43.680131 3763056 cri.go:96] found id: ""
I1227 10:17:43.680249 3763056 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1227 10:17:43.688284 3763056 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1227 10:17:43.698322 3763056 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1227 10:17:43.698415 3763056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1227 10:17:43.706536 3763056 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1227 10:17:43.706565 3763056 kubeadm.go:158] found existing configuration files:
I1227 10:17:43.706618 3763056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1227 10:17:43.714465 3763056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1227 10:17:43.714539 3763056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1227 10:17:43.722341 3763056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1227 10:17:43.730458 3763056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1227 10:17:43.730544 3763056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1227 10:17:43.738522 3763056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1227 10:17:43.746458 3763056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1227 10:17:43.746578 3763056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1227 10:17:43.754004 3763056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1227 10:17:43.761683 3763056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1227 10:17:43.761754 3763056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1227 10:17:43.769356 3763056 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1227 10:17:43.809938 3763056 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
I1227 10:17:43.810004 3763056 kubeadm.go:319] [preflight] Running pre-flight checks
I1227 10:17:43.888039 3763056 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1227 10:17:43.888115 3763056 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1227 10:17:43.888161 3763056 kubeadm.go:319] OS: Linux
I1227 10:17:43.888209 3763056 kubeadm.go:319] CGROUPS_CPU: enabled
I1227 10:17:43.888258 3763056 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1227 10:17:43.888308 3763056 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1227 10:17:43.888357 3763056 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1227 10:17:43.888417 3763056 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1227 10:17:43.888467 3763056 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1227 10:17:43.888513 3763056 kubeadm.go:319] CGROUPS_PIDS: enabled
I1227 10:17:43.888563 3763056 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1227 10:17:43.888611 3763056 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1227 10:17:43.955360 3763056 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1227 10:17:43.955477 3763056 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1227 10:17:43.955574 3763056 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1227 10:17:43.961245 3763056 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1227 10:17:43.968297 3763056 out.go:252] - Generating certificates and keys ...
I1227 10:17:43.968464 3763056 kubeadm.go:319] [certs] Using existing ca certificate authority
I1227 10:17:43.968571 3763056 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1227 10:17:44.151759 3763056 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1227 10:17:44.437427 3763056 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1227 10:17:45.114605 3763056 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1227 10:17:45.275788 3763056 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1227 10:17:45.342233 3763056 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1227 10:17:45.342401 3763056 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-161350 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
I1227 10:17:45.566891 3763056 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1227 10:17:45.567053 3763056 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-161350 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
I1227 10:17:45.841396 3763056 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1227 10:17:46.006212 3763056 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1227 10:17:46.378329 3763056 kubeadm.go:319] [certs] Generating "sa" key and public key
I1227 10:17:46.378873 3763056 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1227 10:17:46.800316 3763056 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1227 10:17:46.925551 3763056 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1227 10:17:47.032542 3763056 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1227 10:17:47.520482 3763056 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1227 10:17:47.810676 3763056 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1227 10:17:47.811446 3763056 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1227 10:17:47.814237 3763056 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1227 10:17:47.818038 3763056 out.go:252] - Booting up control plane ...
I1227 10:17:47.818154 3763056 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1227 10:17:47.818245 3763056 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1227 10:17:47.818318 3763056 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1227 10:17:47.843318 3763056 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1227 10:17:47.843477 3763056 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1227 10:17:47.851356 3763056 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1227 10:17:47.851462 3763056 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1227 10:17:47.851526 3763056 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1227 10:17:47.979462 3763056 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1227 10:17:47.979582 3763056 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1227 10:17:48.477634 3763056 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.78878ms
I1227 10:17:48.481456 3763056 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1227 10:17:48.481548 3763056 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
I1227 10:17:48.481632 3763056 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1227 10:17:48.481706 3763056 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1227 10:17:50.990744 3763056 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.508846054s
I1227 10:17:52.462168 3763056 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.980663632s
I1227 10:17:54.483177 3763056 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001496484s
I1227 10:17:54.522743 3763056 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1227 10:17:54.536550 3763056 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1227 10:17:54.554512 3763056 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
I1227 10:17:54.554751 3763056 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-161350 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1227 10:17:54.566778 3763056 kubeadm.go:319] [bootstrap-token] Using token: y9hbid.csa875mwzjt6ay1x
I1227 10:17:54.569703 3763056 out.go:252] - Configuring RBAC rules ...
I1227 10:17:54.569838 3763056 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1227 10:17:54.573701 3763056 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1227 10:17:54.582852 3763056 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1227 10:17:54.589360 3763056 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1227 10:17:54.593518 3763056 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1227 10:17:54.597707 3763056 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1227 10:17:54.890327 3763056 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1227 10:17:55.319638 3763056 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
I1227 10:17:55.890567 3763056 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
I1227 10:17:55.891775 3763056 kubeadm.go:319]
I1227 10:17:55.891850 3763056 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
I1227 10:17:55.891860 3763056 kubeadm.go:319]
I1227 10:17:55.891934 3763056 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
I1227 10:17:55.891940 3763056 kubeadm.go:319]
I1227 10:17:55.891964 3763056 kubeadm.go:319] mkdir -p $HOME/.kube
I1227 10:17:55.892027 3763056 kubeadm.go:319] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1227 10:17:55.892080 3763056 kubeadm.go:319] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1227 10:17:55.892087 3763056 kubeadm.go:319]
I1227 10:17:55.892139 3763056 kubeadm.go:319] Alternatively, if you are the root user, you can run:
I1227 10:17:55.892148 3763056 kubeadm.go:319]
I1227 10:17:55.892199 3763056 kubeadm.go:319] export KUBECONFIG=/etc/kubernetes/admin.conf
I1227 10:17:55.892208 3763056 kubeadm.go:319]
I1227 10:17:55.892264 3763056 kubeadm.go:319] You should now deploy a pod network to the cluster.
I1227 10:17:55.892346 3763056 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1227 10:17:55.892414 3763056 kubeadm.go:319] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1227 10:17:55.892422 3763056 kubeadm.go:319]
I1227 10:17:55.892501 3763056 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
I1227 10:17:55.892582 3763056 kubeadm.go:319] and service account keys on each node and then running the following as root:
I1227 10:17:55.892591 3763056 kubeadm.go:319]
I1227 10:17:55.892670 3763056 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token y9hbid.csa875mwzjt6ay1x \
I1227 10:17:55.892769 3763056 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:847679729b653704be851a5daf5af83009c664cd52aa150e19612857eea3005b \
I1227 10:17:55.892792 3763056 kubeadm.go:319] --control-plane
I1227 10:17:55.892800 3763056 kubeadm.go:319]
I1227 10:17:55.892880 3763056 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
I1227 10:17:55.892887 3763056 kubeadm.go:319]
I1227 10:17:55.892964 3763056 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token y9hbid.csa875mwzjt6ay1x \
I1227 10:17:55.893064 3763056 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:847679729b653704be851a5daf5af83009c664cd52aa150e19612857eea3005b
I1227 10:17:55.896467 3763056 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1227 10:17:55.896884 3763056 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1227 10:17:55.896997 3763056 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
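The cgroup v1 warning above names the opt-in knob directly. As a hedged aside, applying it on the node might look like the sketch below; the failCgroupV1 field name is inferred from the warning text, and /var/lib/kubelet/config.yaml is the file kubeadm reports writing above, so verify both against your kubelet version:
# Sketch only: explicitly allow cgroup v1 for kubelet v1.35+ (field name assumed from the warning).
echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
sudo systemctl restart kubelet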
I1227 10:17:55.897014 3763056 cni.go:84] Creating CNI manager for ""
I1227 10:17:55.897025 3763056 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1227 10:17:55.900100 3763056 out.go:179] * Configuring CNI (Container Networking Interface) ...
I1227 10:17:55.903019 3763056 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I1227 10:17:55.907085 3763056 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
I1227 10:17:55.907102 3763056 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
I1227 10:17:55.922216 3763056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
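As applied here, the kindnet manifest lands via the version-pinned kubectl shown above. A hedged verification sketch using the same binary and kubeconfig; the app=kindnet label selector is an assumption about minikube's kindnet manifest:
# Sketch only: confirm the recommended CNI daemonset rolled out (label selector assumed).
sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get daemonsets,pods -n kube-system -l app=kindnet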
I1227 10:17:56.210219 3763056 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1227 10:17:56.210355 3763056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1227 10:17:56.210430 3763056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-161350 minikube.k8s.io/updated_at=2025_12_27T10_17_56_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8 minikube.k8s.io/name=embed-certs-161350 minikube.k8s.io/primary=true
I1227 10:17:56.462262 3763056 ops.go:34] apiserver oom_adj: -16
I1227 10:17:56.462387 3763056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1227 10:17:56.962490 3763056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1227 10:17:57.463417 3763056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1227 10:17:57.962540 3763056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1227 10:17:58.463122 3763056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1227 10:17:58.963368 3763056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1227 10:17:59.463030 3763056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1227 10:17:59.962667 3763056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1227 10:18:00.463349 3763056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1227 10:18:00.565700 3763056 kubeadm.go:1114] duration metric: took 4.355394718s to wait for elevateKubeSystemPrivileges
I1227 10:18:00.565730 3763056 kubeadm.go:403] duration metric: took 16.917992409s to StartCluster
I1227 10:18:00.565748 3763056 settings.go:142] acquiring lock: {Name:mk370c624a4706fdf792a8bb308be4364bde23af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 10:18:00.565823 3763056 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/22343-3531265/kubeconfig
I1227 10:18:00.566814 3763056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-3531265/kubeconfig: {Name:mkc7143ac5be1b7104ba62728484394431aded08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 10:18:00.567070 3763056 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1227 10:18:00.567176 3763056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1227 10:18:00.567446 3763056 config.go:182] Loaded profile config "embed-certs-161350": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1227 10:18:00.567496 3763056 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I1227 10:18:00.567554 3763056 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-161350"
I1227 10:18:00.567569 3763056 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-161350"
I1227 10:18:00.567592 3763056 host.go:66] Checking if "embed-certs-161350" exists ...
I1227 10:18:00.568160 3763056 addons.go:70] Setting default-storageclass=true in profile "embed-certs-161350"
I1227 10:18:00.568206 3763056 cli_runner.go:164] Run: docker container inspect embed-certs-161350 --format={{.State.Status}}
I1227 10:18:00.568214 3763056 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-161350"
I1227 10:18:00.568561 3763056 cli_runner.go:164] Run: docker container inspect embed-certs-161350 --format={{.State.Status}}
I1227 10:18:00.572294 3763056 out.go:179] * Verifying Kubernetes components...
I1227 10:18:00.575564 3763056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 10:18:00.607362 3763056 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1227 10:18:00.611199 3763056 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1227 10:18:00.611226 3763056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1227 10:18:00.611295 3763056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-161350
I1227 10:18:00.616002 3763056 addons.go:239] Setting addon default-storageclass=true in "embed-certs-161350"
I1227 10:18:00.616045 3763056 host.go:66] Checking if "embed-certs-161350" exists ...
I1227 10:18:00.616504 3763056 cli_runner.go:164] Run: docker container inspect embed-certs-161350 --format={{.State.Status}}
I1227 10:18:00.651806 3763056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36255 SSHKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/embed-certs-161350/id_rsa Username:docker}
I1227 10:18:00.652481 3763056 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
I1227 10:18:00.652495 3763056 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1227 10:18:00.652553 3763056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-161350
I1227 10:18:00.678394 3763056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36255 SSHKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/embed-certs-161350/id_rsa Username:docker}
I1227 10:18:00.925504 3763056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.76.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I1227 10:18:00.990014 3763056 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1227 10:18:01.008071 3763056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1227 10:18:01.022079 3763056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1227 10:18:01.642442 3763056 start.go:987] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
I1227 10:18:01.643520 3763056 node_ready.go:35] waiting up to 6m0s for node "embed-certs-161350" to be "Ready" ...
I1227 10:18:02.010986 3763056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.002824871s)
I1227 10:18:02.025859 3763056 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
I1227 10:18:02.028996 3763056 addons.go:530] duration metric: took 1.461484918s for enable addons: enabled=[storage-provisioner default-storageclass]
I1227 10:18:02.150618 3763056 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-161350" context rescaled to 1 replicas
W1227 10:18:03.648594 3763056 node_ready.go:57] node "embed-certs-161350" has "Ready":"False" status (will retry)
W1227 10:18:06.147768 3763056 node_ready.go:57] node "embed-certs-161350" has "Ready":"False" status (will retry)
W1227 10:18:08.647536 3763056 node_ready.go:57] node "embed-certs-161350" has "Ready":"False" status (will retry)
W1227 10:18:10.648659 3763056 node_ready.go:57] node "embed-certs-161350" has "Ready":"False" status (will retry)
W1227 10:18:13.147866 3763056 node_ready.go:57] node "embed-certs-161350" has "Ready":"False" status (will retry)
I1227 10:18:13.648106 3763056 node_ready.go:49] node "embed-certs-161350" is "Ready"
I1227 10:18:13.648134 3763056 node_ready.go:38] duration metric: took 12.003385326s for node "embed-certs-161350" to be "Ready" ...
I1227 10:18:13.648147 3763056 api_server.go:52] waiting for apiserver process to appear ...
I1227 10:18:13.648207 3763056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1227 10:18:13.674165 3763056 api_server.go:72] duration metric: took 13.107053095s to wait for apiserver process to appear ...
I1227 10:18:13.674189 3763056 api_server.go:88] waiting for apiserver healthz status ...
I1227 10:18:13.674209 3763056 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I1227 10:18:13.683041 3763056 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
ok
I1227 10:18:13.684226 3763056 api_server.go:141] control plane version: v1.35.0
I1227 10:18:13.684250 3763056 api_server.go:131] duration metric: took 10.053774ms to wait for apiserver health ...
I1227 10:18:13.684260 3763056 system_pods.go:43] waiting for kube-system pods to appear ...
I1227 10:18:13.687246 3763056 system_pods.go:59] 8 kube-system pods found
I1227 10:18:13.687279 3763056 system_pods.go:61] "coredns-7d764666f9-f6v7w" [9e651ba4-9299-42c8-b93f-516a86362ce1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1227 10:18:13.687288 3763056 system_pods.go:61] "etcd-embed-certs-161350" [fa9dcdf6-3e6b-4294-a77d-8fa5ae43b623] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1227 10:18:13.687294 3763056 system_pods.go:61] "kindnet-fl99p" [ea98417c-d5b8-415a-9dca-9ece8c30aa00] Running
I1227 10:18:13.687299 3763056 system_pods.go:61] "kube-apiserver-embed-certs-161350" [fe5e3118-326d-4674-b3ef-6d032f9a1ca5] Running
I1227 10:18:13.687306 3763056 system_pods.go:61] "kube-controller-manager-embed-certs-161350" [79d7ccc8-ba14-4ee2-8bf0-69d9e41716cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1227 10:18:13.687310 3763056 system_pods.go:61] "kube-proxy-snglb" [d8fe467c-cc82-4cb6-986a-5821dd6d1f20] Running
I1227 10:18:13.687315 3763056 system_pods.go:61] "kube-scheduler-embed-certs-161350" [9421b569-5406-48b4-a90f-524f81094860] Running
I1227 10:18:13.687321 3763056 system_pods.go:61] "storage-provisioner" [4d972dfe-cafb-4fd2-a4a7-d2fc43c2c24c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1227 10:18:13.687330 3763056 system_pods.go:74] duration metric: took 3.064327ms to wait for pod list to return data ...
I1227 10:18:13.687338 3763056 default_sa.go:34] waiting for default service account to be created ...
I1227 10:18:13.693672 3763056 default_sa.go:45] found service account: "default"
I1227 10:18:13.693696 3763056 default_sa.go:55] duration metric: took 6.352245ms for default service account to be created ...
I1227 10:18:13.693707 3763056 system_pods.go:116] waiting for k8s-apps to be running ...
I1227 10:18:13.698133 3763056 system_pods.go:86] 8 kube-system pods found
I1227 10:18:13.698171 3763056 system_pods.go:89] "coredns-7d764666f9-f6v7w" [9e651ba4-9299-42c8-b93f-516a86362ce1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1227 10:18:13.698180 3763056 system_pods.go:89] "etcd-embed-certs-161350" [fa9dcdf6-3e6b-4294-a77d-8fa5ae43b623] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1227 10:18:13.698187 3763056 system_pods.go:89] "kindnet-fl99p" [ea98417c-d5b8-415a-9dca-9ece8c30aa00] Running
I1227 10:18:13.698192 3763056 system_pods.go:89] "kube-apiserver-embed-certs-161350" [fe5e3118-326d-4674-b3ef-6d032f9a1ca5] Running
I1227 10:18:13.698200 3763056 system_pods.go:89] "kube-controller-manager-embed-certs-161350" [79d7ccc8-ba14-4ee2-8bf0-69d9e41716cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1227 10:18:13.698205 3763056 system_pods.go:89] "kube-proxy-snglb" [d8fe467c-cc82-4cb6-986a-5821dd6d1f20] Running
I1227 10:18:13.698210 3763056 system_pods.go:89] "kube-scheduler-embed-certs-161350" [9421b569-5406-48b4-a90f-524f81094860] Running
I1227 10:18:13.698216 3763056 system_pods.go:89] "storage-provisioner" [4d972dfe-cafb-4fd2-a4a7-d2fc43c2c24c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1227 10:18:13.698246 3763056 retry.go:84] will retry after 200ms: missing components: kube-dns
I1227 10:18:13.949331 3763056 system_pods.go:86] 8 kube-system pods found
I1227 10:18:13.949424 3763056 system_pods.go:89] "coredns-7d764666f9-f6v7w" [9e651ba4-9299-42c8-b93f-516a86362ce1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1227 10:18:13.949449 3763056 system_pods.go:89] "etcd-embed-certs-161350" [fa9dcdf6-3e6b-4294-a77d-8fa5ae43b623] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1227 10:18:13.949499 3763056 system_pods.go:89] "kindnet-fl99p" [ea98417c-d5b8-415a-9dca-9ece8c30aa00] Running
I1227 10:18:13.949533 3763056 system_pods.go:89] "kube-apiserver-embed-certs-161350" [fe5e3118-326d-4674-b3ef-6d032f9a1ca5] Running
I1227 10:18:13.949558 3763056 system_pods.go:89] "kube-controller-manager-embed-certs-161350" [79d7ccc8-ba14-4ee2-8bf0-69d9e41716cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1227 10:18:13.949580 3763056 system_pods.go:89] "kube-proxy-snglb" [d8fe467c-cc82-4cb6-986a-5821dd6d1f20] Running
I1227 10:18:13.949614 3763056 system_pods.go:89] "kube-scheduler-embed-certs-161350" [9421b569-5406-48b4-a90f-524f81094860] Running
I1227 10:18:13.949659 3763056 system_pods.go:89] "storage-provisioner" [4d972dfe-cafb-4fd2-a4a7-d2fc43c2c24c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1227 10:18:14.253262 3763056 system_pods.go:86] 8 kube-system pods found
I1227 10:18:14.253300 3763056 system_pods.go:89] "coredns-7d764666f9-f6v7w" [9e651ba4-9299-42c8-b93f-516a86362ce1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1227 10:18:14.253310 3763056 system_pods.go:89] "etcd-embed-certs-161350" [fa9dcdf6-3e6b-4294-a77d-8fa5ae43b623] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1227 10:18:14.253317 3763056 system_pods.go:89] "kindnet-fl99p" [ea98417c-d5b8-415a-9dca-9ece8c30aa00] Running
I1227 10:18:14.253323 3763056 system_pods.go:89] "kube-apiserver-embed-certs-161350" [fe5e3118-326d-4674-b3ef-6d032f9a1ca5] Running
I1227 10:18:14.253330 3763056 system_pods.go:89] "kube-controller-manager-embed-certs-161350" [79d7ccc8-ba14-4ee2-8bf0-69d9e41716cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1227 10:18:14.253336 3763056 system_pods.go:89] "kube-proxy-snglb" [d8fe467c-cc82-4cb6-986a-5821dd6d1f20] Running
I1227 10:18:14.253341 3763056 system_pods.go:89] "kube-scheduler-embed-certs-161350" [9421b569-5406-48b4-a90f-524f81094860] Running
I1227 10:18:14.253348 3763056 system_pods.go:89] "storage-provisioner" [4d972dfe-cafb-4fd2-a4a7-d2fc43c2c24c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1227 10:18:14.708556 3763056 system_pods.go:86] 8 kube-system pods found
I1227 10:18:14.708592 3763056 system_pods.go:89] "coredns-7d764666f9-f6v7w" [9e651ba4-9299-42c8-b93f-516a86362ce1] Running
I1227 10:18:14.708603 3763056 system_pods.go:89] "etcd-embed-certs-161350" [fa9dcdf6-3e6b-4294-a77d-8fa5ae43b623] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1227 10:18:14.708639 3763056 system_pods.go:89] "kindnet-fl99p" [ea98417c-d5b8-415a-9dca-9ece8c30aa00] Running
I1227 10:18:14.708653 3763056 system_pods.go:89] "kube-apiserver-embed-certs-161350" [fe5e3118-326d-4674-b3ef-6d032f9a1ca5] Running
I1227 10:18:14.708662 3763056 system_pods.go:89] "kube-controller-manager-embed-certs-161350" [79d7ccc8-ba14-4ee2-8bf0-69d9e41716cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1227 10:18:14.708668 3763056 system_pods.go:89] "kube-proxy-snglb" [d8fe467c-cc82-4cb6-986a-5821dd6d1f20] Running
I1227 10:18:14.708678 3763056 system_pods.go:89] "kube-scheduler-embed-certs-161350" [9421b569-5406-48b4-a90f-524f81094860] Running
I1227 10:18:14.708683 3763056 system_pods.go:89] "storage-provisioner" [4d972dfe-cafb-4fd2-a4a7-d2fc43c2c24c] Running
I1227 10:18:14.708708 3763056 system_pods.go:126] duration metric: took 1.014981567s to wait for k8s-apps to be running ...
I1227 10:18:14.708721 3763056 system_svc.go:44] waiting for kubelet service to be running ....
I1227 10:18:14.708791 3763056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1227 10:18:14.722076 3763056 system_svc.go:56] duration metric: took 13.345565ms WaitForService to wait for kubelet
I1227 10:18:14.722107 3763056 kubeadm.go:587] duration metric: took 14.155000619s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1227 10:18:14.722144 3763056 node_conditions.go:102] verifying NodePressure condition ...
I1227 10:18:14.725242 3763056 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
I1227 10:18:14.725285 3763056 node_conditions.go:123] node cpu capacity is 2
I1227 10:18:14.725300 3763056 node_conditions.go:105] duration metric: took 3.144817ms to run NodePressure ...
I1227 10:18:14.725333 3763056 start.go:242] waiting for startup goroutines ...
I1227 10:18:14.725346 3763056 start.go:247] waiting for cluster config update ...
I1227 10:18:14.725358 3763056 start.go:256] writing updated cluster config ...
I1227 10:18:14.725652 3763056 ssh_runner.go:195] Run: rm -f paused
I1227 10:18:14.729430 3763056 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1227 10:18:14.733217 3763056 pod_ready.go:83] waiting for pod "coredns-7d764666f9-f6v7w" in "kube-system" namespace to be "Ready" or be gone ...
I1227 10:18:14.737752 3763056 pod_ready.go:94] pod "coredns-7d764666f9-f6v7w" is "Ready"
I1227 10:18:14.737781 3763056 pod_ready.go:86] duration metric: took 4.534384ms for pod "coredns-7d764666f9-f6v7w" in "kube-system" namespace to be "Ready" or be gone ...
I1227 10:18:14.740043 3763056 pod_ready.go:83] waiting for pod "etcd-embed-certs-161350" in "kube-system" namespace to be "Ready" or be gone ...
I1227 10:18:15.745895 3763056 pod_ready.go:94] pod "etcd-embed-certs-161350" is "Ready"
I1227 10:18:15.745925 3763056 pod_ready.go:86] duration metric: took 1.005854401s for pod "etcd-embed-certs-161350" in "kube-system" namespace to be "Ready" or be gone ...
I1227 10:18:15.748355 3763056 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-161350" in "kube-system" namespace to be "Ready" or be gone ...
I1227 10:18:15.753044 3763056 pod_ready.go:94] pod "kube-apiserver-embed-certs-161350" is "Ready"
I1227 10:18:15.753074 3763056 pod_ready.go:86] duration metric: took 4.692993ms for pod "kube-apiserver-embed-certs-161350" in "kube-system" namespace to be "Ready" or be gone ...
I1227 10:18:15.755574 3763056 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-161350" in "kube-system" namespace to be "Ready" or be gone ...
I1227 10:18:15.933981 3763056 pod_ready.go:94] pod "kube-controller-manager-embed-certs-161350" is "Ready"
I1227 10:18:15.934010 3763056 pod_ready.go:86] duration metric: took 178.399759ms for pod "kube-controller-manager-embed-certs-161350" in "kube-system" namespace to be "Ready" or be gone ...
I1227 10:18:16.134401 3763056 pod_ready.go:83] waiting for pod "kube-proxy-snglb" in "kube-system" namespace to be "Ready" or be gone ...
I1227 10:18:16.533900 3763056 pod_ready.go:94] pod "kube-proxy-snglb" is "Ready"
I1227 10:18:16.533926 3763056 pod_ready.go:86] duration metric: took 399.495422ms for pod "kube-proxy-snglb" in "kube-system" namespace to be "Ready" or be gone ...
I1227 10:18:16.734055 3763056 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-161350" in "kube-system" namespace to be "Ready" or be gone ...
I1227 10:18:17.134038 3763056 pod_ready.go:94] pod "kube-scheduler-embed-certs-161350" is "Ready"
I1227 10:18:17.134073 3763056 pod_ready.go:86] duration metric: took 399.980268ms for pod "kube-scheduler-embed-certs-161350" in "kube-system" namespace to be "Ready" or be gone ...
I1227 10:18:17.134087 3763056 pod_ready.go:40] duration metric: took 2.404624993s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1227 10:18:17.189411 3763056 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
I1227 10:18:17.192720 3763056 out.go:203]
W1227 10:18:17.195698 3763056 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
I1227 10:18:17.198715 3763056 out.go:179] - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
I1227 10:18:17.202513 3763056 out.go:179] * Done! kubectl is now configured to use "embed-certs-161350" cluster and "default" namespace by default
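The version-skew hint above is actionable as-is; a minimal usage sketch, assuming minikube's global -p flag to target this profile:
# Sketch only: run the kubectl matching the cluster's v1.35.0 instead of the host's v1.33.2.
minikube kubectl -p embed-certs-161350 -- get pods -A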
I1227 10:18:33.783343 3738115 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000245188s
I1227 10:18:33.783607 3738115 kubeadm.go:319]
I1227 10:18:33.783674 3738115 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1227 10:18:33.783709 3738115 kubeadm.go:319] - The kubelet is not running
I1227 10:18:33.783814 3738115 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1227 10:18:33.783820 3738115 kubeadm.go:319]
I1227 10:18:33.783924 3738115 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1227 10:18:33.783956 3738115 kubeadm.go:319] - 'systemctl status kubelet'
I1227 10:18:33.783987 3738115 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1227 10:18:33.783992 3738115 kubeadm.go:319]
I1227 10:18:33.788224 3738115 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1227 10:18:33.788670 3738115 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1227 10:18:33.788796 3738115 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1227 10:18:33.789043 3738115 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1227 10:18:33.789054 3738115 kubeadm.go:319]
I1227 10:18:33.789122 3738115 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
I1227 10:18:33.789185 3738115 kubeadm.go:403] duration metric: took 8m6.169929895s to StartCluster
I1227 10:18:33.789236 3738115 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1227 10:18:33.789303 3738115 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
I1227 10:18:33.814197 3738115 cri.go:96] found id: ""
I1227 10:18:33.814236 3738115 logs.go:282] 0 containers: []
W1227 10:18:33.814245 3738115 logs.go:284] No container was found matching "kube-apiserver"
I1227 10:18:33.814252 3738115 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1227 10:18:33.814314 3738115 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
I1227 10:18:33.839019 3738115 cri.go:96] found id: ""
I1227 10:18:33.839043 3738115 logs.go:282] 0 containers: []
W1227 10:18:33.839051 3738115 logs.go:284] No container was found matching "etcd"
I1227 10:18:33.839058 3738115 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1227 10:18:33.839114 3738115 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
I1227 10:18:33.876385 3738115 cri.go:96] found id: ""
I1227 10:18:33.876414 3738115 logs.go:282] 0 containers: []
W1227 10:18:33.876427 3738115 logs.go:284] No container was found matching "coredns"
I1227 10:18:33.876433 3738115 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1227 10:18:33.876491 3738115 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
I1227 10:18:33.906761 3738115 cri.go:96] found id: ""
I1227 10:18:33.906788 3738115 logs.go:282] 0 containers: []
W1227 10:18:33.906797 3738115 logs.go:284] No container was found matching "kube-scheduler"
I1227 10:18:33.906803 3738115 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1227 10:18:33.906864 3738115 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
I1227 10:18:33.935959 3738115 cri.go:96] found id: ""
I1227 10:18:33.935985 3738115 logs.go:282] 0 containers: []
W1227 10:18:33.935994 3738115 logs.go:284] No container was found matching "kube-proxy"
I1227 10:18:33.936000 3738115 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1227 10:18:33.936056 3738115 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
I1227 10:18:33.960107 3738115 cri.go:96] found id: ""
I1227 10:18:33.960131 3738115 logs.go:282] 0 containers: []
W1227 10:18:33.960143 3738115 logs.go:284] No container was found matching "kube-controller-manager"
I1227 10:18:33.960149 3738115 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1227 10:18:33.960236 3738115 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
I1227 10:18:33.989273 3738115 cri.go:96] found id: ""
I1227 10:18:33.989300 3738115 logs.go:282] 0 containers: []
W1227 10:18:33.989310 3738115 logs.go:284] No container was found matching "kindnet"
I1227 10:18:33.989356 3738115 logs.go:123] Gathering logs for containerd ...
I1227 10:18:33.989378 3738115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1227 10:18:34.028316 3738115 logs.go:123] Gathering logs for container status ...
I1227 10:18:34.028366 3738115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1227 10:18:34.063676 3738115 logs.go:123] Gathering logs for kubelet ...
I1227 10:18:34.063759 3738115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1227 10:18:34.124368 3738115 logs.go:123] Gathering logs for dmesg ...
I1227 10:18:34.124411 3738115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1227 10:18:34.139149 3738115 logs.go:123] Gathering logs for describe nodes ...
I1227 10:18:34.139179 3738115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1227 10:18:34.206064 3738115 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1227 10:18:34.197603 4861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 10:18:34.198405 4861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 10:18:34.199906 4861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 10:18:34.200420 4861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 10:18:34.202145 4861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1227 10:18:34.206090 3738115 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000245188s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1227 10:18:34.206211 3738115 out.go:285] *
W1227 10:18:34.206276 3738115 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
W1227 10:18:34.206296 3738115 out.go:285] *
W1227 10:18:34.206569 3738115 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1227 10:18:34.211278 3738115 out.go:203]
W1227 10:18:34.214075 3738115 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
W1227 10:18:34.214127 3738115 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W1227 10:18:34.214153 3738115 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
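For reference, acting on the "Suggestion" line above would mean deleting the failed profile and retrying the same invocation with the kubelet cgroup driver pinned to systemd. A minimal sketch, reusing the profile name and flags from the failing run; whether this clears the cgroup v1 validation failure on this host is not verified here:

  out/minikube-linux-arm64 delete -p force-systemd-flag-027208
  out/minikube-linux-arm64 start -p force-systemd-flag-027208 --memory=3072 --force-systemd \
      --alsologtostderr -v=5 --driver=docker --container-runtime=containerd \
      --extra-config=kubelet.cgroup-driver=systemd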
I1227 10:18:34.217184 3738115 out.go:203]
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
==> containerd <==
Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.834284373Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.834306880Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.834362879Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.834389085Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.834404781Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.834421216Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.834434811Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.834447373Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.834468690Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.834516156Z" level=info msg="Connect containerd service"
Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.835003407Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.836180697Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.855375575Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.855442552Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.855480869Z" level=info msg="Start subscribing containerd event"
Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.855530550Z" level=info msg="Start recovering state"
Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.894697193Z" level=info msg="Start event monitor"
Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.894747596Z" level=info msg="Start cni network conf syncer for default"
Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.894758041Z" level=info msg="Start streaming server"
Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.894768084Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.894781663Z" level=info msg="runtime interface starting up..."
Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.894790336Z" level=info msg="starting plugins..."
Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.894806262Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.894993819Z" level=info msg="containerd successfully booted in 0.082110s"
Dec 27 10:10:25 force-systemd-flag-027208 systemd[1]: Started containerd.service - containerd container runtime.
==> describe nodes <==
command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1227 10:18:35.546559 4979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 10:18:35.547192 4979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 10:18:35.548864 4979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 10:18:35.549437 4979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 10:18:35.551145 4979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
==> dmesg <==
[Dec27 09:24] kauditd_printk_skb: 8 callbacks suppressed
==> kernel <==
10:18:35 up 16:01, 0 user, load average: 3.10, 2.42, 2.26
Linux force-systemd-flag-027208 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
==> kubelet <==
Dec 27 10:18:32 force-systemd-flag-027208 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 27 10:18:33 force-systemd-flag-027208 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
Dec 27 10:18:33 force-systemd-flag-027208 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 27 10:18:33 force-systemd-flag-027208 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 27 10:18:33 force-systemd-flag-027208 kubelet[4774]: E1227 10:18:33.154055 4774 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 27 10:18:33 force-systemd-flag-027208 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 27 10:18:33 force-systemd-flag-027208 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 27 10:18:33 force-systemd-flag-027208 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
Dec 27 10:18:33 force-systemd-flag-027208 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 27 10:18:33 force-systemd-flag-027208 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 27 10:18:33 force-systemd-flag-027208 kubelet[4801]: E1227 10:18:33.924848 4801 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 27 10:18:33 force-systemd-flag-027208 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 27 10:18:33 force-systemd-flag-027208 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 27 10:18:34 force-systemd-flag-027208 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
Dec 27 10:18:34 force-systemd-flag-027208 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 27 10:18:34 force-systemd-flag-027208 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 27 10:18:34 force-systemd-flag-027208 kubelet[4873]: E1227 10:18:34.681089 4873 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 27 10:18:34 force-systemd-flag-027208 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 27 10:18:34 force-systemd-flag-027208 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 27 10:18:35 force-systemd-flag-027208 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
Dec 27 10:18:35 force-systemd-flag-027208 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 27 10:18:35 force-systemd-flag-027208 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 27 10:18:35 force-systemd-flag-027208 kubelet[4948]: E1227 10:18:35.423315 4948 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 27 10:18:35 force-systemd-flag-027208 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 27 10:18:35 force-systemd-flag-027208 systemd[1]: kubelet.service: Failed with result 'exit-code'.
-- /stdout --
helpers_test.go:263: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-027208 -n force-systemd-flag-027208
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-027208 -n force-systemd-flag-027208: exit status 6 (318.717771ms)
-- stdout --
Stopped
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1227 10:18:35.983396 3767379 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-027208" does not appear in /home/jenkins/minikube-integration/22343-3531265/kubeconfig
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-flag-027208" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-flag-027208" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-arm64 delete -p force-systemd-flag-027208
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-027208: (1.969364461s)
--- FAIL: TestForceSystemdFlag (503.95s)
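Root-cause note: the kubelet crash loop above (restart counter climbing through 318-321) matches the SystemVerification warning in the kubeadm stderr: on a cgroup v1 host, kubelet v1.35+ refuses to start unless the 'FailCgroupV1' configuration option is set to 'false'. A minimal sketch of setting that option by hand inside the node, assuming the lowerCamelCase key 'failCgroupV1' and that the key is not already present in the rendered config (the path comes from the kubelet-start step above); moving the host to cgroups v2, as the linked KEP recommends, is the cleaner fix:

  # Append the opt-out named in the kubeadm warning to the kubelet config
  # that kubeadm wrote during 'kubelet-start' (key spelling is assumed).
  echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
  sudo systemctl restart kubelet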