=== RUN TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT TestForceSystemdFlag
docker_test.go:91: (dbg) Run: out/minikube-linux-arm64 start -p force-systemd-flag-610060 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=containerd
E0111 08:12:40.285571 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/functional-214480/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:13:00.476718 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/addons-709292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-flag-610060 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=containerd: exit status 109 (8m21.490172261s)
-- stdout --
* [force-systemd-flag-610060] minikube v1.37.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=22402
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/22402-3122619/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-3122619/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "force-systemd-flag-610060" primary control-plane node in "force-systemd-flag-610060" cluster
* Pulling base image v0.0.48-1768032998-22402 ...
* Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
-- /stdout --
** stderr **
I0111 08:11:45.966483 3329885 out.go:360] Setting OutFile to fd 1 ...
I0111 08:11:45.966703 3329885 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 08:11:45.966732 3329885 out.go:374] Setting ErrFile to fd 2...
I0111 08:11:45.966751 3329885 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 08:11:45.967177 3329885 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-3122619/.minikube/bin
I0111 08:11:45.968023 3329885 out.go:368] Setting JSON to false
I0111 08:11:45.968908 3329885 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":50057,"bootTime":1768069049,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
I0111 08:11:45.968983 3329885 start.go:143] virtualization:
I0111 08:11:45.972677 3329885 out.go:179] * [force-systemd-flag-610060] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I0111 08:11:45.977345 3329885 out.go:179] - MINIKUBE_LOCATION=22402
I0111 08:11:45.977453 3329885 notify.go:221] Checking for updates...
I0111 08:11:45.984099 3329885 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0111 08:11:45.987358 3329885 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22402-3122619/kubeconfig
I0111 08:11:45.990611 3329885 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-3122619/.minikube
I0111 08:11:45.993730 3329885 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I0111 08:11:45.996854 3329885 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I0111 08:11:46.002916 3329885 config.go:182] Loaded profile config "force-systemd-env-305397": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I0111 08:11:46.003074 3329885 driver.go:422] Setting default libvirt URI to qemu:///system
I0111 08:11:46.034142 3329885 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
I0111 08:11:46.034275 3329885 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0111 08:11:46.125120 3329885 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 08:11:46.113366797 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I0111 08:11:46.125226 3329885 docker.go:319] overlay module found
I0111 08:11:46.128633 3329885 out.go:179] * Using the docker driver based on user configuration
I0111 08:11:46.131564 3329885 start.go:309] selected driver: docker
I0111 08:11:46.131591 3329885 start.go:928] validating driver "docker" against <nil>
I0111 08:11:46.131605 3329885 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0111 08:11:46.132458 3329885 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0111 08:11:46.188583 3329885 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 08:11:46.179395708 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I0111 08:11:46.188739 3329885 start_flags.go:333] no existing cluster config was found, will generate one from the flags
I0111 08:11:46.188960 3329885 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
I0111 08:11:46.191962 3329885 out.go:179] * Using Docker driver with root privileges
I0111 08:11:46.194890 3329885 cni.go:84] Creating CNI manager for ""
I0111 08:11:46.194959 3329885 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0111 08:11:46.194975 3329885 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
I0111 08:11:46.195053 3329885 start.go:353] cluster config:
{Name:force-systemd-flag-610060 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-610060 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I0111 08:11:46.200121 3329885 out.go:179] * Starting "force-systemd-flag-610060" primary control-plane node in "force-systemd-flag-610060" cluster
I0111 08:11:46.203055 3329885 cache.go:134] Beginning downloading kic base image for docker with containerd
I0111 08:11:46.206054 3329885 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
I0111 08:11:46.208898 3329885 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I0111 08:11:46.208958 3329885 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-3122619/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
I0111 08:11:46.208971 3329885 cache.go:65] Caching tarball of preloaded images
I0111 08:11:46.208985 3329885 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
I0111 08:11:46.209059 3329885 preload.go:251] Found /home/jenkins/minikube-integration/22402-3122619/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I0111 08:11:46.209070 3329885 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
I0111 08:11:46.209177 3329885 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/config.json ...
I0111 08:11:46.209198 3329885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/config.json: {Name:mke00c980f6aa6c98163914c28e2b3a0179313f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0111 08:11:46.228792 3329885 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
I0111 08:11:46.228814 3329885 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in daemon, skipping load
I0111 08:11:46.228829 3329885 cache.go:243] Successfully downloaded all kic artifacts
I0111 08:11:46.228857 3329885 start.go:360] acquireMachinesLock for force-systemd-flag-610060: {Name:mk7b285d446b288e2ef1025bb5bf30ad660e990b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0111 08:11:46.228963 3329885 start.go:364] duration metric: took 84.946µs to acquireMachinesLock for "force-systemd-flag-610060"
I0111 08:11:46.228995 3329885 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-610060 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-610060 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0111 08:11:46.229072 3329885 start.go:125] createHost starting for "" (driver="docker")
I0111 08:11:46.232524 3329885 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
I0111 08:11:46.232749 3329885 start.go:159] libmachine.API.Create for "force-systemd-flag-610060" (driver="docker")
I0111 08:11:46.232785 3329885 client.go:173] LocalClient.Create starting
I0111 08:11:46.232857 3329885 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca.pem
I0111 08:11:46.232894 3329885 main.go:144] libmachine: Decoding PEM data...
I0111 08:11:46.232913 3329885 main.go:144] libmachine: Parsing certificate...
I0111 08:11:46.232970 3329885 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/cert.pem
I0111 08:11:46.232992 3329885 main.go:144] libmachine: Decoding PEM data...
I0111 08:11:46.233007 3329885 main.go:144] libmachine: Parsing certificate...
I0111 08:11:46.233367 3329885 cli_runner.go:164] Run: docker network inspect force-systemd-flag-610060 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0111 08:11:46.250050 3329885 cli_runner.go:211] docker network inspect force-systemd-flag-610060 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0111 08:11:46.250150 3329885 network_create.go:284] running [docker network inspect force-systemd-flag-610060] to gather additional debugging logs...
I0111 08:11:46.250170 3329885 cli_runner.go:164] Run: docker network inspect force-systemd-flag-610060
W0111 08:11:46.264851 3329885 cli_runner.go:211] docker network inspect force-systemd-flag-610060 returned with exit code 1
I0111 08:11:46.264883 3329885 network_create.go:287] error running [docker network inspect force-systemd-flag-610060]: docker network inspect force-systemd-flag-610060: exit status 1
stdout:
[]
stderr:
Error response from daemon: network force-systemd-flag-610060 not found
I0111 08:11:46.264896 3329885 network_create.go:289] output of [docker network inspect force-systemd-flag-610060]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network force-systemd-flag-610060 not found
** /stderr **
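[Editor's note: the two failed "docker network inspect" calls above are expected. minikube probes for the profile network and treats exit status 1 ("network ... not found") as the signal to create one. A minimal Go sketch of that probe, assuming only the docker CLI on PATH; this is an illustration, not minikube's actual cli_runner code:]

    package main

    import (
        "fmt"
        "os/exec"
    )

    // networkExists reports whether a Docker network is present by running
    // `docker network inspect` and treating a non-zero exit as "not found".
    func networkExists(name string) (bool, error) {
        cmd := exec.Command("docker", "network", "inspect", name, "--format", "{{.Name}}")
        if err := cmd.Run(); err != nil {
            if _, isExit := err.(*exec.ExitError); isExit {
                return false, nil // inspect exited non-zero: network does not exist
            }
            return false, err // the docker binary itself could not be run
        }
        return true, nil
    }

    func main() {
        ok, err := networkExists("force-systemd-flag-610060")
        fmt.Println(ok, err)
    }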
I0111 08:11:46.265009 3329885 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0111 08:11:46.281585 3329885 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6d6a2604bb10 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ca:cd:63:f9:b2:f8} reservation:<nil>}
I0111 08:11:46.281997 3329885 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-cec031213447 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:62:71:bf:56:ac:cb} reservation:<nil>}
I0111 08:11:46.282212 3329885 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0e2d137ca1da IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:12:68:81:9e:35:63} reservation:<nil>}
I0111 08:11:46.282485 3329885 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-9455289443b5 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:be:d1:66:6a:84:dd} reservation:<nil>}
I0111 08:11:46.282935 3329885 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a129c0}
I0111 08:11:46.282958 3329885 network_create.go:124] attempt to create docker network force-systemd-flag-610060 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
I0111 08:11:46.283014 3329885 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-610060 force-systemd-flag-610060
I0111 08:11:46.338524 3329885 network_create.go:108] docker network force-systemd-flag-610060 192.168.85.0/24 created
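[Editor's note: the subnet scan above walks candidate private /24s, starting at 192.168.49.0/24 and stepping the third octet by 9 (49, 58, 67, 76, 85), taking the first one with no matching host interface. A hedged sketch of that candidate walk; the "taken" check is reduced here to a host-interface scan, which only approximates minikube's network.go logic:]

    package main

    import (
        "fmt"
        "net"
    )

    // taken reports whether any host interface already has an address
    // inside the candidate subnet (e.g. an existing br-... bridge gateway).
    func taken(subnet *net.IPNet) bool {
        addrs, err := net.InterfaceAddrs()
        if err != nil {
            return true // be conservative on error
        }
        for _, a := range addrs {
            if ipn, ok := a.(*net.IPNet); ok && subnet.Contains(ipn.IP) {
                return true
            }
        }
        return false
    }

    func main() {
        // Candidates observed in the log: 192.168.49.0/24, .58, .67, .76, .85, ...
        for octet := 49; octet <= 255-9; octet += 9 {
            _, subnet, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", octet))
            if !taken(subnet) {
                fmt.Println("using free private subnet", subnet)
                return
            }
        }
    }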
I0111 08:11:46.338555 3329885 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-610060" container
I0111 08:11:46.338639 3329885 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0111 08:11:46.354768 3329885 cli_runner.go:164] Run: docker volume create force-systemd-flag-610060 --label name.minikube.sigs.k8s.io=force-systemd-flag-610060 --label created_by.minikube.sigs.k8s.io=true
I0111 08:11:46.372694 3329885 oci.go:103] Successfully created a docker volume force-systemd-flag-610060
I0111 08:11:46.372798 3329885 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-610060-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-610060 --entrypoint /usr/bin/test -v force-systemd-flag-610060:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -d /var/lib
I0111 08:11:46.921885 3329885 oci.go:107] Successfully prepared a docker volume force-systemd-flag-610060
I0111 08:11:46.921940 3329885 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I0111 08:11:46.921951 3329885 kic.go:194] Starting extracting preloaded images to volume ...
I0111 08:11:46.922032 3329885 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-3122619/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-610060:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir
I0111 08:11:50.731187 3329885 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-3122619/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-610060:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir: (3.809106983s)
I0111 08:11:50.731222 3329885 kic.go:203] duration metric: took 3.80926748s to extract preloaded images to volume ...
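[Editor's note: the preload step above never copies the lz4 tarball into the node image. It is bind-mounted read-only into a throwaway container whose entrypoint is /usr/bin/tar, which extracts straight into the named volume that later becomes the node's /var. A sketch of assembling such a command; the tarball path is a placeholder, and this is illustrative rather than minikube's kic.go:]

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        const (
            image   = "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402" // digest omitted for brevity
            tarball = "/path/to/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4"
            volume  = "force-systemd-flag-610060"
        )
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro", // tarball mounted read-only
            "-v", volume+":/extractDir",        // named volume receives the images
            image,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }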
W0111 08:11:50.731361 3329885 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0111 08:11:50.731477 3329885 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0111 08:11:50.797692 3329885 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-610060 --name force-systemd-flag-610060 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-610060 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-610060 --network force-systemd-flag-610060 --ip 192.168.85.2 --volume force-systemd-flag-610060:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615
I0111 08:11:51.110888 3329885 cli_runner.go:164] Run: docker container inspect force-systemd-flag-610060 --format={{.State.Running}}
I0111 08:11:51.136837 3329885 cli_runner.go:164] Run: docker container inspect force-systemd-flag-610060 --format={{.State.Status}}
I0111 08:11:51.165956 3329885 cli_runner.go:164] Run: docker exec force-systemd-flag-610060 stat /var/lib/dpkg/alternatives/iptables
I0111 08:11:51.215991 3329885 oci.go:144] the created container "force-systemd-flag-610060" has a running status.
I0111 08:11:51.216037 3329885 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22402-3122619/.minikube/machines/force-systemd-flag-610060/id_rsa...
I0111 08:11:51.516534 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/machines/force-systemd-flag-610060/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0111 08:11:51.516633 3329885 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22402-3122619/.minikube/machines/force-systemd-flag-610060/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0111 08:11:51.539007 3329885 cli_runner.go:164] Run: docker container inspect force-systemd-flag-610060 --format={{.State.Status}}
I0111 08:11:51.567105 3329885 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0111 08:11:51.567123 3329885 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-610060 chown docker:docker /home/docker/.ssh/authorized_keys]
I0111 08:11:51.645455 3329885 cli_runner.go:164] Run: docker container inspect force-systemd-flag-610060 --format={{.State.Status}}
I0111 08:11:51.680580 3329885 machine.go:94] provisionDockerMachine start ...
I0111 08:11:51.680675 3329885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-610060
I0111 08:11:51.710716 3329885 main.go:144] libmachine: Using SSH client type: native
I0111 08:11:51.711064 3329885 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil> [] 0s} 127.0.0.1 35813 <nil> <nil>}
I0111 08:11:51.711073 3329885 main.go:144] libmachine: About to run SSH command:
hostname
I0111 08:11:51.711854 3329885 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I0111 08:11:54.859728 3329885 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-610060
I0111 08:11:54.859755 3329885 ubuntu.go:182] provisioning hostname "force-systemd-flag-610060"
I0111 08:11:54.859827 3329885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-610060
I0111 08:11:54.876832 3329885 main.go:144] libmachine: Using SSH client type: native
I0111 08:11:54.877152 3329885 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil> [] 0s} 127.0.0.1 35813 <nil> <nil>}
I0111 08:11:54.877172 3329885 main.go:144] libmachine: About to run SSH command:
sudo hostname force-systemd-flag-610060 && echo "force-systemd-flag-610060" | sudo tee /etc/hostname
I0111 08:11:55.043732 3329885 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-610060
I0111 08:11:55.043827 3329885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-610060
I0111 08:11:55.066688 3329885 main.go:144] libmachine: Using SSH client type: native
I0111 08:11:55.067032 3329885 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil> [] 0s} 127.0.0.1 35813 <nil> <nil>}
I0111 08:11:55.067054 3329885 main.go:144] libmachine: About to run SSH command:
if ! grep -xq '.*\sforce-systemd-flag-610060' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-610060/g' /etc/hosts;
else
echo '127.0.1.1 force-systemd-flag-610060' | sudo tee -a /etc/hosts;
fi
fi
I0111 08:11:55.224621 3329885 main.go:144] libmachine: SSH cmd err, output: <nil>:
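[Editor's note: SSH reaches the node through Docker's port mapping. The container publishes 22/tcp on an ephemeral loopback port (--publish=127.0.0.1::22 in the docker run above), and minikube recovers the assigned port (35813 here) with the inspect template shown in the log before dialing. A minimal Go version of that lookup:]

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hostSSHPort returns the ephemeral host port Docker mapped to the
    // container's 22/tcp, using the same template as the log above.
    func hostSSHPort(container string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect",
            "-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
            container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := hostSSHPort("force-systemd-flag-610060")
        fmt.Println(port, err) // e.g. "35813" -> ssh docker@127.0.0.1 -p 35813
    }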
I0111 08:11:55.224644 3329885 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22402-3122619/.minikube CaCertPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22402-3122619/.minikube}
I0111 08:11:55.224663 3329885 ubuntu.go:190] setting up certificates
I0111 08:11:55.224672 3329885 provision.go:84] configureAuth start
I0111 08:11:55.224733 3329885 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-610060
I0111 08:11:55.242257 3329885 provision.go:143] copyHostCerts
I0111 08:11:55.242309 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.pem
I0111 08:11:55.242342 3329885 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.pem, removing ...
I0111 08:11:55.242359 3329885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.pem
I0111 08:11:55.242440 3329885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.pem (1078 bytes)
I0111 08:11:55.242520 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22402-3122619/.minikube/cert.pem
I0111 08:11:55.242542 3329885 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-3122619/.minikube/cert.pem, removing ...
I0111 08:11:55.242556 3329885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-3122619/.minikube/cert.pem
I0111 08:11:55.242586 3329885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22402-3122619/.minikube/cert.pem (1123 bytes)
I0111 08:11:55.242658 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22402-3122619/.minikube/key.pem
I0111 08:11:55.242679 3329885 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-3122619/.minikube/key.pem, removing ...
I0111 08:11:55.242686 3329885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-3122619/.minikube/key.pem
I0111 08:11:55.242713 3329885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22402-3122619/.minikube/key.pem (1675 bytes)
I0111 08:11:55.242763 3329885 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-610060 san=[127.0.0.1 192.168.85.2 force-systemd-flag-610060 localhost minikube]
I0111 08:11:55.423643 3329885 provision.go:177] copyRemoteCerts
I0111 08:11:55.423714 3329885 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0111 08:11:55.423760 3329885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-610060
I0111 08:11:55.442089 3329885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35813 SSHKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/force-systemd-flag-610060/id_rsa Username:docker}
I0111 08:11:55.544114 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0111 08:11:55.544174 3329885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0111 08:11:55.562451 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/machines/server.pem -> /etc/docker/server.pem
I0111 08:11:55.562560 3329885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
I0111 08:11:55.579624 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0111 08:11:55.579720 3329885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0111 08:11:55.597215 3329885 provision.go:87] duration metric: took 372.519842ms to configureAuth
I0111 08:11:55.597285 3329885 ubuntu.go:206] setting minikube options for container-runtime
I0111 08:11:55.597493 3329885 config.go:182] Loaded profile config "force-systemd-flag-610060": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I0111 08:11:55.597509 3329885 machine.go:97] duration metric: took 3.916909939s to provisionDockerMachine
I0111 08:11:55.597517 3329885 client.go:176] duration metric: took 9.364722727s to LocalClient.Create
I0111 08:11:55.597537 3329885 start.go:167] duration metric: took 9.364789212s to libmachine.API.Create "force-systemd-flag-610060"
I0111 08:11:55.597550 3329885 start.go:293] postStartSetup for "force-systemd-flag-610060" (driver="docker")
I0111 08:11:55.597559 3329885 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0111 08:11:55.597617 3329885 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0111 08:11:55.597673 3329885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-610060
I0111 08:11:55.614880 3329885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35813 SSHKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/force-systemd-flag-610060/id_rsa Username:docker}
I0111 08:11:55.720221 3329885 ssh_runner.go:195] Run: cat /etc/os-release
I0111 08:11:55.723472 3329885 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0111 08:11:55.723501 3329885 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I0111 08:11:55.723512 3329885 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-3122619/.minikube/addons for local assets ...
I0111 08:11:55.723589 3329885 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-3122619/.minikube/files for local assets ...
I0111 08:11:55.723683 3329885 filesync.go:149] local asset: /home/jenkins/minikube-integration/22402-3122619/.minikube/files/etc/ssl/certs/31244842.pem -> 31244842.pem in /etc/ssl/certs
I0111 08:11:55.723702 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/files/etc/ssl/certs/31244842.pem -> /etc/ssl/certs/31244842.pem
I0111 08:11:55.723821 3329885 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0111 08:11:55.731084 3329885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/files/etc/ssl/certs/31244842.pem --> /etc/ssl/certs/31244842.pem (1708 bytes)
I0111 08:11:55.748115 3329885 start.go:296] duration metric: took 150.541395ms for postStartSetup
I0111 08:11:55.748506 3329885 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-610060
I0111 08:11:55.765507 3329885 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/config.json ...
I0111 08:11:55.765856 3329885 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0111 08:11:55.765912 3329885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-610060
I0111 08:11:55.782246 3329885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35813 SSHKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/force-systemd-flag-610060/id_rsa Username:docker}
I0111 08:11:55.885136 3329885 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0111 08:11:55.889854 3329885 start.go:128] duration metric: took 9.66076858s to createHost
I0111 08:11:55.889877 3329885 start.go:83] releasing machines lock for "force-systemd-flag-610060", held for 9.660899777s
I0111 08:11:55.889946 3329885 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-610060
I0111 08:11:55.909521 3329885 ssh_runner.go:195] Run: cat /version.json
I0111 08:11:55.909572 3329885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-610060
I0111 08:11:55.909672 3329885 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0111 08:11:55.909730 3329885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-610060
I0111 08:11:55.929746 3329885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35813 SSHKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/force-systemd-flag-610060/id_rsa Username:docker}
I0111 08:11:55.940401 3329885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35813 SSHKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/force-systemd-flag-610060/id_rsa Username:docker}
I0111 08:11:56.137299 3329885 ssh_runner.go:195] Run: systemctl --version
I0111 08:11:56.144072 3329885 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0111 08:11:56.149649 3329885 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0111 08:11:56.149741 3329885 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0111 08:11:56.178398 3329885 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
I0111 08:11:56.178426 3329885 start.go:496] detecting cgroup driver to use...
I0111 08:11:56.178440 3329885 start.go:500] using "systemd" cgroup driver as enforced via flags
I0111 08:11:56.178497 3329885 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0111 08:11:56.194017 3329885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0111 08:11:56.207355 3329885 docker.go:218] disabling cri-docker service (if available) ...
I0111 08:11:56.207437 3329885 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0111 08:11:56.225243 3329885 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0111 08:11:56.244325 3329885 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0111 08:11:56.364184 3329885 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0111 08:11:56.477116 3329885 docker.go:234] disabling docker service ...
I0111 08:11:56.477205 3329885 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0111 08:11:56.497704 3329885 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0111 08:11:56.510638 3329885 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0111 08:11:56.657297 3329885 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0111 08:11:56.780195 3329885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0111 08:11:56.793449 3329885 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0111 08:11:56.808590 3329885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I0111 08:11:56.818025 3329885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0111 08:11:56.826953 3329885 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
I0111 08:11:56.827070 3329885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
I0111 08:11:56.836326 3329885 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0111 08:11:56.845203 3329885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0111 08:11:56.854138 3329885 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0111 08:11:56.862604 3329885 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0111 08:11:56.870988 3329885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0111 08:11:56.879524 3329885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0111 08:11:56.888444 3329885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
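[Editor's note: this run of sed commands is the heart of the --force-systemd path. It rewrites /etc/containerd/config.toml in place so runc uses the systemd cgroup driver, normalizes the runtime type to io.containerd.runc.v2, and re-enables unprivileged ports. After the edits, the relevant fragment should look roughly like the following; this is sketched against the version-2 CRI plugin schema that the sed patterns above target, and exact table names can differ between containerd config versions:]

    version = 2

    [plugins."io.containerd.grpc.v1.cri"]
      enable_unprivileged_ports = true
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = true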
I0111 08:11:56.897577 3329885 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0111 08:11:56.905221 3329885 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0111 08:11:56.912587 3329885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0111 08:11:57.029083 3329885 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0111 08:11:57.166782 3329885 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
I0111 08:11:57.166926 3329885 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0111 08:11:57.170932 3329885 start.go:574] Will wait 60s for crictl version
I0111 08:11:57.171048 3329885 ssh_runner.go:195] Run: which crictl
I0111 08:11:57.174867 3329885 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I0111 08:11:57.199898 3329885 start.go:590] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v2.2.1
RuntimeApiVersion: v1
I0111 08:11:57.199981 3329885 ssh_runner.go:195] Run: containerd --version
I0111 08:11:57.219306 3329885 ssh_runner.go:195] Run: containerd --version
I0111 08:11:57.244995 3329885 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
I0111 08:11:57.248122 3329885 cli_runner.go:164] Run: docker network inspect force-systemd-flag-610060 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0111 08:11:57.264038 3329885 ssh_runner.go:195] Run: grep 192.168.85.1 host.minikube.internal$ /etc/hosts
I0111 08:11:57.267824 3329885 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0111 08:11:57.277801 3329885 kubeadm.go:884] updating cluster {Name:force-systemd-flag-610060 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-610060 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
I0111 08:11:57.278152 3329885 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I0111 08:11:57.278240 3329885 ssh_runner.go:195] Run: sudo crictl images --output json
I0111 08:11:57.315254 3329885 containerd.go:635] all images are preloaded for containerd runtime.
I0111 08:11:57.315275 3329885 containerd.go:542] Images already preloaded, skipping extraction
I0111 08:11:57.315336 3329885 ssh_runner.go:195] Run: sudo crictl images --output json
I0111 08:11:57.349393 3329885 containerd.go:635] all images are preloaded for containerd runtime.
I0111 08:11:57.349415 3329885 cache_images.go:86] Images are preloaded, skipping loading
I0111 08:11:57.349423 3329885 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 containerd true true} ...
I0111 08:11:57.349517 3329885 kubeadm.go:947] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-610060 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
[Install]
config:
{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-610060 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0111 08:11:57.349582 3329885 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
I0111 08:11:57.382639 3329885 cni.go:84] Creating CNI manager for ""
I0111 08:11:57.382663 3329885 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0111 08:11:57.382685 3329885 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I0111 08:11:57.382708 3329885 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-610060 NodeName:force-systemd-flag-610060 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0111 08:11:57.382828 3329885 kubeadm.go:203] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.85.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
name: "force-systemd-flag-610060"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.85.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.35.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
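[Editor's note: both halves have to agree here. The KubeletConfiguration above pins cgroupDriver: systemd, matching the SystemdCgroup=true written into containerd earlier; a mismatch between the two is a classic source of kubelet crash loops. A hedged way to double-check the runtime side from inside the node, assuming crictl info dumps the CRI plugin's effective config as JSON (the exact key path is containerd-version dependent, so this uses crude string matching):]

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Dump the CRI runtime's effective config and look for the systemd
        // cgroup setting.
        out, err := exec.Command("sudo", "crictl", "info").CombinedOutput()
        if err != nil {
            panic(err)
        }
        if strings.Contains(string(out), `"SystemdCgroup": true`) {
            fmt.Println("containerd is using the systemd cgroup driver")
        } else {
            fmt.Println("SystemdCgroup not found; check /etc/containerd/config.toml")
        }
    }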
I0111 08:11:57.382905 3329885 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
I0111 08:11:57.390559 3329885 binaries.go:51] Found k8s binaries, skipping transfer
I0111 08:11:57.390630 3329885 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0111 08:11:57.398214 3329885 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
I0111 08:11:57.410850 3329885 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0111 08:11:57.424327 3329885 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
I0111 08:11:57.436984 3329885 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I0111 08:11:57.440400 3329885 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0111 08:11:57.450402 3329885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0111 08:11:57.573600 3329885 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0111 08:11:57.590952 3329885 certs.go:69] Setting up /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060 for IP: 192.168.85.2
I0111 08:11:57.590987 3329885 certs.go:195] generating shared ca certs ...
I0111 08:11:57.591004 3329885 certs.go:227] acquiring lock for ca certs: {Name:mk4f88e5992499f3a8089baf463e3ba7f81a52c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0111 08:11:57.591198 3329885 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.key
I0111 08:11:57.591246 3329885 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/proxy-client-ca.key
I0111 08:11:57.591260 3329885 certs.go:257] generating profile certs ...
I0111 08:11:57.591327 3329885 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/client.key
I0111 08:11:57.591359 3329885 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/client.crt with IP's: []
I0111 08:11:58.180659 3329885 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/client.crt ...
I0111 08:11:58.180706 3329885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/client.crt: {Name:mk9bd0b635b7181a879895561a6d686f28614647 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0111 08:11:58.180963 3329885 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/client.key ...
I0111 08:11:58.180982 3329885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/client.key: {Name:mkfe2120f2e6288c7ad6ca3b08d9dccc6b76b069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0111 08:11:58.181090 3329885 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/apiserver.key.bb1d5120
I0111 08:11:58.181117 3329885 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/apiserver.crt.bb1d5120 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
I0111 08:11:58.711099 3329885 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/apiserver.crt.bb1d5120 ...
I0111 08:11:58.711132 3329885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/apiserver.crt.bb1d5120: {Name:mke960834fa45cb1bccf7b579ab4a287f777445c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0111 08:11:58.711369 3329885 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/apiserver.key.bb1d5120 ...
I0111 08:11:58.711385 3329885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/apiserver.key.bb1d5120: {Name:mkf73783f074957828edc09fa9ea5a4548656c1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0111 08:11:58.711473 3329885 certs.go:382] copying /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/apiserver.crt.bb1d5120 -> /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/apiserver.crt
I0111 08:11:58.711554 3329885 certs.go:386] copying /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/apiserver.key.bb1d5120 -> /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/apiserver.key
I0111 08:11:58.711646 3329885 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/proxy-client.key
I0111 08:11:58.711665 3329885 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/proxy-client.crt with IP's: []
I0111 08:11:58.912664 3329885 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/proxy-client.crt ...
I0111 08:11:58.912696 3329885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/proxy-client.crt: {Name:mk922fc5010cb627196768e155857c21dcb7d9e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0111 08:11:58.912882 3329885 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/proxy-client.key ...
I0111 08:11:58.912895 3329885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/proxy-client.key: {Name:mk6bffbe07eace11218581bafe3df67bbad9745d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
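That completes the profile certs (client, apiserver, proxy-client). The SANs requested for the apiserver cert can be verified offline; a sketch against the path above, where the IPs from the generation step (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.85.2) should appear:

  openssl x509 -noout -text \
    -in /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/apiserver.crt \
    | grep -A1 'Subject Alternative Name'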
I0111 08:11:58.912983 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0111 08:11:58.913003 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0111 08:11:58.913015 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0111 08:11:58.913030 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0111 08:11:58.913042 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0111 08:11:58.913059 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0111 08:11:58.913074 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0111 08:11:58.913089 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0111 08:11:58.913153 3329885 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/3124484.pem (1338 bytes)
W0111 08:11:58.913196 3329885 certs.go:480] ignoring /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/3124484_empty.pem, impossibly tiny 0 bytes
I0111 08:11:58.913209 3329885 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca-key.pem (1679 bytes)
I0111 08:11:58.913237 3329885 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca.pem (1078 bytes)
I0111 08:11:58.913264 3329885 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/cert.pem (1123 bytes)
I0111 08:11:58.913300 3329885 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/key.pem (1675 bytes)
I0111 08:11:58.913351 3329885 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/files/etc/ssl/certs/31244842.pem (1708 bytes)
I0111 08:11:58.913383 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0111 08:11:58.913398 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/3124484.pem -> /usr/share/ca-certificates/3124484.pem
I0111 08:11:58.913409 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/files/etc/ssl/certs/31244842.pem -> /usr/share/ca-certificates/31244842.pem
I0111 08:11:58.913910 3329885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0111 08:11:58.934507 3329885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0111 08:11:58.955410 3329885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0111 08:11:58.973948 3329885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0111 08:11:58.992574 3329885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
I0111 08:11:59.013013 3329885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0111 08:11:59.031246 3329885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0111 08:11:59.051982 3329885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0111 08:11:59.070932 3329885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0111 08:11:59.088240 3329885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/3124484.pem --> /usr/share/ca-certificates/3124484.pem (1338 bytes)
I0111 08:11:59.106023 3329885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/files/etc/ssl/certs/31244842.pem --> /usr/share/ca-certificates/31244842.pem (1708 bytes)
I0111 08:11:59.124702 3329885 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
I0111 08:11:59.137901 3329885 ssh_runner.go:195] Run: openssl version
I0111 08:11:59.144226 3329885 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/31244842.pem
I0111 08:11:59.152606 3329885 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/31244842.pem /etc/ssl/certs/31244842.pem
I0111 08:11:59.160337 3329885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/31244842.pem
I0111 08:11:59.164263 3329885 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 11 07:32 /usr/share/ca-certificates/31244842.pem
I0111 08:11:59.164416 3329885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/31244842.pem
I0111 08:11:59.206887 3329885 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I0111 08:11:59.214713 3329885 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/31244842.pem /etc/ssl/certs/3ec20f2e.0
I0111 08:11:59.222480 3329885 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I0111 08:11:59.230140 3329885 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I0111 08:11:59.238568 3329885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0111 08:11:59.242380 3329885 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 11 07:26 /usr/share/ca-certificates/minikubeCA.pem
I0111 08:11:59.242451 3329885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0111 08:11:59.283430 3329885 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I0111 08:11:59.291242 3329885 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
I0111 08:11:59.299017 3329885 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3124484.pem
I0111 08:11:59.306462 3329885 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3124484.pem /etc/ssl/certs/3124484.pem
I0111 08:11:59.314159 3329885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3124484.pem
I0111 08:11:59.318199 3329885 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 11 07:32 /usr/share/ca-certificates/3124484.pem
I0111 08:11:59.318267 3329885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3124484.pem
I0111 08:11:59.364365 3329885 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I0111 08:11:59.372018 3329885 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3124484.pem /etc/ssl/certs/51391683.0
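The openssl/ln pairs above follow OpenSSL's subject-hash convention: each CA under /etc/ssl/certs is reachable through a <subject-hash>.0 symlink. One iteration, condensed (same CA as in the log, whose hash comes out to b5213941):

  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # h=b5213941 here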
I0111 08:11:59.379565 3329885 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0111 08:11:59.383252 3329885 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
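The failed stat is the expected signal here: a missing apiserver-kubelet-client cert is read as a first start. The same probe as an illustrative one-liner:

  stat /var/lib/minikube/certs/apiserver-kubelet-client.crt >/dev/null 2>&1 \
    || echo 'no client cert yet: first start'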
I0111 08:11:59.383305 3329885 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-610060 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-610060 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I0111 08:11:59.383396 3329885 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0111 08:11:59.383462 3329885 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0111 08:11:59.409520 3329885 cri.go:96] found id: ""
I0111 08:11:59.409625 3329885 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0111 08:11:59.417554 3329885 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0111 08:11:59.425266 3329885 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I0111 08:11:59.425333 3329885 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0111 08:11:59.433014 3329885 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0111 08:11:59.433033 3329885 kubeadm.go:158] found existing configuration files:
I0111 08:11:59.433106 3329885 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0111 08:11:59.441062 3329885 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0111 08:11:59.441144 3329885 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0111 08:11:59.448415 3329885 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0111 08:11:59.456088 3329885 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0111 08:11:59.456158 3329885 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0111 08:11:59.463696 3329885 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0111 08:11:59.471473 3329885 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0111 08:11:59.471550 3329885 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0111 08:11:59.479035 3329885 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0111 08:11:59.486818 3329885 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0111 08:11:59.486907 3329885 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
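The four grep/rm pairs above apply one rule per kubeconfig: drop any file that does not already point at the expected control-plane endpoint. Condensed into a loop over the same files and URL:

  for f in admin kubelet controller-manager scheduler; do
    sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
      || sudo rm -f "/etc/kubernetes/${f}.conf"
  done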
I0111 08:11:59.494369 3329885 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0111 08:11:59.531469 3329885 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
I0111 08:11:59.531535 3329885 kubeadm.go:319] [preflight] Running pre-flight checks
I0111 08:11:59.616591 3329885 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I0111 08:11:59.616667 3329885 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I0111 08:11:59.616707 3329885 kubeadm.go:319] OS: Linux
I0111 08:11:59.616757 3329885 kubeadm.go:319] CGROUPS_CPU: enabled
I0111 08:11:59.616809 3329885 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I0111 08:11:59.616860 3329885 kubeadm.go:319] CGROUPS_CPUSET: enabled
I0111 08:11:59.616913 3329885 kubeadm.go:319] CGROUPS_DEVICES: enabled
I0111 08:11:59.616966 3329885 kubeadm.go:319] CGROUPS_FREEZER: enabled
I0111 08:11:59.617026 3329885 kubeadm.go:319] CGROUPS_MEMORY: enabled
I0111 08:11:59.617076 3329885 kubeadm.go:319] CGROUPS_PIDS: enabled
I0111 08:11:59.617128 3329885 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I0111 08:11:59.617177 3329885 kubeadm.go:319] CGROUPS_BLKIO: enabled
I0111 08:11:59.680028 3329885 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I0111 08:11:59.680143 3329885 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0111 08:11:59.680238 3329885 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0111 08:11:59.688820 3329885 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0111 08:11:59.695891 3329885 out.go:252] - Generating certificates and keys ...
I0111 08:11:59.696068 3329885 kubeadm.go:319] [certs] Using existing ca certificate authority
I0111 08:11:59.696180 3329885 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I0111 08:11:59.888200 3329885 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I0111 08:12:00.676065 3329885 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I0111 08:12:00.930267 3329885 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I0111 08:12:01.030505 3329885 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I0111 08:12:01.283889 3329885 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I0111 08:12:01.284218 3329885 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-610060 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I0111 08:12:01.834107 3329885 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I0111 08:12:01.834425 3329885 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-610060 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I0111 08:12:01.879677 3329885 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I0111 08:12:02.051499 3329885 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I0111 08:12:02.379706 3329885 kubeadm.go:319] [certs] Generating "sa" key and public key
I0111 08:12:02.379938 3329885 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0111 08:12:02.595602 3329885 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I0111 08:12:03.030736 3329885 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0111 08:12:03.387448 3329885 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0111 08:12:03.538058 3329885 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0111 08:12:04.600361 3329885 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0111 08:12:04.601328 3329885 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0111 08:12:04.604433 3329885 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0111 08:12:04.608234 3329885 out.go:252] - Booting up control plane ...
I0111 08:12:04.608345 3329885 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0111 08:12:04.608424 3329885 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0111 08:12:04.609166 3329885 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0111 08:12:04.626788 3329885 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0111 08:12:04.626897 3329885 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I0111 08:12:04.634879 3329885 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I0111 08:12:04.635665 3329885 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0111 08:12:04.635945 3329885 kubeadm.go:319] [kubelet-start] Starting the kubelet
I0111 08:12:04.773900 3329885 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0111 08:12:04.774028 3329885 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0111 08:16:04.774066 3329885 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001234218s
I0111 08:16:04.774104 3329885 kubeadm.go:319]
I0111 08:16:04.774255 3329885 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I0111 08:16:04.774569 3329885 kubeadm.go:319] - The kubelet is not running
I0111 08:16:04.774865 3329885 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0111 08:16:04.774874 3329885 kubeadm.go:319]
I0111 08:16:04.775174 3329885 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0111 08:16:04.775233 3329885 kubeadm.go:319] - 'systemctl status kubelet'
I0111 08:16:04.775288 3329885 kubeadm.go:319] - 'journalctl -xeu kubelet'
I0111 08:16:04.775294 3329885 kubeadm.go:319]
I0111 08:16:04.781053 3329885 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I0111 08:16:04.781556 3329885 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I0111 08:16:04.781670 3329885 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0111 08:16:04.781954 3329885 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I0111 08:16:04.781976 3329885 kubeadm.go:319]
I0111 08:16:04.782060 3329885 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
W0111 08:16:04.782196 3329885 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-610060 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-610060 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001234218s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
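At this point the useful evidence lives on the node, not in this log. The checks kubeadm itself suggests, plus the healthz endpoint it was polling (run inside the minikube container):

  systemctl status kubelet --no-pager
  journalctl -xeu kubelet | tail -n 50
  curl -sSL http://127.0.0.1:10248/healthz   # kubeadm gave this up to 4m0s above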
I0111 08:16:04.782323 3329885 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I0111 08:16:05.213214 3329885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0111 08:16:05.227519 3329885 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I0111 08:16:05.227584 3329885 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0111 08:16:05.237077 3329885 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0111 08:16:05.237098 3329885 kubeadm.go:158] found existing configuration files:
I0111 08:16:05.237153 3329885 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0111 08:16:05.245177 3329885 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0111 08:16:05.245249 3329885 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0111 08:16:05.253388 3329885 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0111 08:16:05.262192 3329885 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0111 08:16:05.262276 3329885 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0111 08:16:05.270385 3329885 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0111 08:16:05.278493 3329885 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0111 08:16:05.278558 3329885 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0111 08:16:05.286603 3329885 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0111 08:16:05.294682 3329885 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0111 08:16:05.294754 3329885 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0111 08:16:05.302943 3329885 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0111 08:16:05.342985 3329885 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
I0111 08:16:05.343160 3329885 kubeadm.go:319] [preflight] Running pre-flight checks
I0111 08:16:05.415732 3329885 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I0111 08:16:05.415812 3329885 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I0111 08:16:05.415853 3329885 kubeadm.go:319] OS: Linux
I0111 08:16:05.415903 3329885 kubeadm.go:319] CGROUPS_CPU: enabled
I0111 08:16:05.415955 3329885 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I0111 08:16:05.416005 3329885 kubeadm.go:319] CGROUPS_CPUSET: enabled
I0111 08:16:05.416055 3329885 kubeadm.go:319] CGROUPS_DEVICES: enabled
I0111 08:16:05.416107 3329885 kubeadm.go:319] CGROUPS_FREEZER: enabled
I0111 08:16:05.416158 3329885 kubeadm.go:319] CGROUPS_MEMORY: enabled
I0111 08:16:05.416207 3329885 kubeadm.go:319] CGROUPS_PIDS: enabled
I0111 08:16:05.416260 3329885 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I0111 08:16:05.416342 3329885 kubeadm.go:319] CGROUPS_BLKIO: enabled
I0111 08:16:05.488509 3329885 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I0111 08:16:05.488621 3329885 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0111 08:16:05.488712 3329885 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0111 08:16:05.496708 3329885 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0111 08:16:05.500018 3329885 out.go:252] - Generating certificates and keys ...
I0111 08:16:05.500132 3329885 kubeadm.go:319] [certs] Using existing ca certificate authority
I0111 08:16:05.500212 3329885 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I0111 08:16:05.500346 3329885 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0111 08:16:05.500426 3329885 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
I0111 08:16:05.500509 3329885 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
I0111 08:16:05.500579 3329885 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
I0111 08:16:05.500657 3329885 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
I0111 08:16:05.500734 3329885 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
I0111 08:16:05.500836 3329885 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0111 08:16:05.500927 3329885 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0111 08:16:05.500980 3329885 kubeadm.go:319] [certs] Using the existing "sa" key
I0111 08:16:05.501053 3329885 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0111 08:16:05.594753 3329885 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I0111 08:16:05.890561 3329885 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0111 08:16:06.331295 3329885 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0111 08:16:06.574863 3329885 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0111 08:16:06.785086 3329885 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0111 08:16:06.785655 3329885 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0111 08:16:06.788115 3329885 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0111 08:16:06.790955 3329885 out.go:252] - Booting up control plane ...
I0111 08:16:06.791057 3329885 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0111 08:16:06.791134 3329885 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0111 08:16:06.791201 3329885 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0111 08:16:06.812845 3329885 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0111 08:16:06.813197 3329885 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I0111 08:16:06.820488 3329885 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I0111 08:16:06.820834 3329885 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0111 08:16:06.820880 3329885 kubeadm.go:319] [kubelet-start] Starting the kubelet
I0111 08:16:06.986413 3329885 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0111 08:16:06.986538 3329885 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0111 08:20:06.987125 3329885 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001117732s
I0111 08:20:06.987156 3329885 kubeadm.go:319]
I0111 08:20:06.987538 3329885 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I0111 08:20:06.987650 3329885 kubeadm.go:319] - The kubelet is not running
I0111 08:20:06.987914 3329885 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0111 08:20:06.987923 3329885 kubeadm.go:319]
I0111 08:20:06.988383 3329885 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0111 08:20:06.988449 3329885 kubeadm.go:319] - 'systemctl status kubelet'
I0111 08:20:06.988619 3329885 kubeadm.go:319] - 'journalctl -xeu kubelet'
I0111 08:20:06.988627 3329885 kubeadm.go:319]
I0111 08:20:06.994037 3329885 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I0111 08:20:06.994459 3329885 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I0111 08:20:06.994571 3329885 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0111 08:20:06.994810 3329885 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I0111 08:20:06.994819 3329885 kubeadm.go:319]
I0111 08:20:06.994887 3329885 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
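Both attempts die behind the same cgroups v1 warning. Per the warning text, keeping kubelet v1.35+ on a cgroups v1 host requires an explicit opt-in; a sketch of what that could look like, assuming the camelCase KubeletConfiguration key failCgroupV1 (verify against the config API for the kubelet version in use):

  # append to the kubelet config the node already uses, then restart
  echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
  sudo systemctl restart kubelet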
I0111 08:20:06.994944 3329885 kubeadm.go:403] duration metric: took 8m7.611643479s to StartCluster
I0111 08:20:06.994981 3329885 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0111 08:20:06.995043 3329885 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
I0111 08:20:07.022617 3329885 cri.go:96] found id: ""
I0111 08:20:07.022699 3329885 logs.go:282] 0 containers: []
W0111 08:20:07.022717 3329885 logs.go:284] No container was found matching "kube-apiserver"
I0111 08:20:07.022724 3329885 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0111 08:20:07.022804 3329885 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
I0111 08:20:07.050589 3329885 cri.go:96] found id: ""
I0111 08:20:07.050614 3329885 logs.go:282] 0 containers: []
W0111 08:20:07.050623 3329885 logs.go:284] No container was found matching "etcd"
I0111 08:20:07.050629 3329885 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0111 08:20:07.050713 3329885 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
I0111 08:20:07.076582 3329885 cri.go:96] found id: ""
I0111 08:20:07.076608 3329885 logs.go:282] 0 containers: []
W0111 08:20:07.076618 3329885 logs.go:284] No container was found matching "coredns"
I0111 08:20:07.076625 3329885 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0111 08:20:07.076719 3329885 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
I0111 08:20:07.103212 3329885 cri.go:96] found id: ""
I0111 08:20:07.103238 3329885 logs.go:282] 0 containers: []
W0111 08:20:07.103247 3329885 logs.go:284] No container was found matching "kube-scheduler"
I0111 08:20:07.103254 3329885 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0111 08:20:07.103318 3329885 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
I0111 08:20:07.129632 3329885 cri.go:96] found id: ""
I0111 08:20:07.129709 3329885 logs.go:282] 0 containers: []
W0111 08:20:07.129733 3329885 logs.go:284] No container was found matching "kube-proxy"
I0111 08:20:07.129744 3329885 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0111 08:20:07.129817 3329885 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
I0111 08:20:07.155361 3329885 cri.go:96] found id: ""
I0111 08:20:07.155388 3329885 logs.go:282] 0 containers: []
W0111 08:20:07.155397 3329885 logs.go:284] No container was found matching "kube-controller-manager"
I0111 08:20:07.155404 3329885 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0111 08:20:07.155466 3329885 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
I0111 08:20:07.180698 3329885 cri.go:96] found id: ""
I0111 08:20:07.180793 3329885 logs.go:282] 0 containers: []
W0111 08:20:07.180810 3329885 logs.go:284] No container was found matching "kindnet"
I0111 08:20:07.180822 3329885 logs.go:123] Gathering logs for kubelet ...
I0111 08:20:07.180834 3329885 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0111 08:20:07.237611 3329885 logs.go:123] Gathering logs for dmesg ...
I0111 08:20:07.237644 3329885 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0111 08:20:07.252588 3329885 logs.go:123] Gathering logs for describe nodes ...
I0111 08:20:07.252615 3329885 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0111 08:20:07.317178 3329885 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E0111 08:20:07.309153 4820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0111 08:20:07.309712 4820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0111 08:20:07.311194 4820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0111 08:20:07.311610 4820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0111 08:20:07.313064 4820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
E0111 08:20:07.309153 4820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0111 08:20:07.309712 4820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0111 08:20:07.311194 4820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0111 08:20:07.311610 4820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0111 08:20:07.313064 4820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0111 08:20:07.317197 3329885 logs.go:123] Gathering logs for containerd ...
I0111 08:20:07.317211 3329885 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0111 08:20:07.357000 3329885 logs.go:123] Gathering logs for container status ...
I0111 08:20:07.357043 3329885 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W0111 08:20:07.386881 3329885 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001117732s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W0111 08:20:07.386931 3329885 out.go:285] *
W0111 08:20:07.386981 3329885 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
W0111 08:20:07.386998 3329885 out.go:285] *
W0111 08:20:07.387249 3329885 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0111 08:20:07.394169 3329885 out.go:203]
W0111 08:20:07.397066 3329885 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
W0111 08:20:07.397111 3329885 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0111 08:20:07.397136 3329885 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I0111 08:20:07.400215 3329885 out.go:203]
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-flag-610060 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=containerd" : exit status 109
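The advice kubeadm embeds in the failure above reduces to a short on-node triage loop; a sketch, assuming a shell inside the node (e.g. via minikube ssh -p force-systemd-flag-610060):

  # is the kubelet unit running, and what killed it?
  systemctl status kubelet
  journalctl -xeu kubelet --no-pager | tail -n 50
  # the same health endpoint kubeadm polled for 4m0s before giving up
  curl -sSL http://127.0.0.1:10248/healthz

On a cgroups-v1 host such as this one, the SystemVerification warning additionally says kubelet v1.35+ requires the configuration option FailCgroupV1 to be set to false; whether that alone would unblock this particular hang is not established by the log.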
docker_test.go:121: (dbg) Run: out/minikube-linux-arm64 -p force-systemd-flag-610060 ssh "cat /etc/containerd/config.toml"
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2026-01-11 08:20:07.804164748 +0000 UTC m=+3265.838463320
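Before the post-mortem, it is worth noting what the printed remediation would look like when applied to this profile; a sketch carrying over the flags from the failing invocation plus the suggested extra-config:

  out/minikube-linux-arm64 start -p force-systemd-flag-610060 --memory=3072 \
    --force-systemd --driver=docker --container-runtime=containerd \
    --extra-config=kubelet.cgroup-driver=systemd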
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======> post-mortem[TestForceSystemdFlag]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======> post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:240: (dbg) Run: docker inspect force-systemd-flag-610060
helpers_test.go:244: (dbg) docker inspect force-systemd-flag-610060:
-- stdout --
[
{
"Id": "13258c8511dbba990f568c01b6080fc04fe8fac10db23212692b172424c9f332",
"Created": "2026-01-11T08:11:50.813449934Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 3330323,
"ExitCode": 0,
"Error": "",
"StartedAt": "2026-01-11T08:11:50.887790565Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:c30b0ef598bea80c56dc4b61cd46a579326b46036ca8ef885614e2a49a37d006",
"ResolvConfPath": "/var/lib/docker/containers/13258c8511dbba990f568c01b6080fc04fe8fac10db23212692b172424c9f332/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/13258c8511dbba990f568c01b6080fc04fe8fac10db23212692b172424c9f332/hostname",
"HostsPath": "/var/lib/docker/containers/13258c8511dbba990f568c01b6080fc04fe8fac10db23212692b172424c9f332/hosts",
"LogPath": "/var/lib/docker/containers/13258c8511dbba990f568c01b6080fc04fe8fac10db23212692b172424c9f332/13258c8511dbba990f568c01b6080fc04fe8fac10db23212692b172424c9f332-json.log",
"Name": "/force-systemd-flag-610060",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"force-systemd-flag-610060:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "force-systemd-flag-610060",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 3221225472,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 6442450944,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "13258c8511dbba990f568c01b6080fc04fe8fac10db23212692b172424c9f332",
"LowerDir": "/var/lib/docker/overlay2/3303dc1dfcec74d7fa27dc3d78662cfd4ed7429ac134fa3faec78dbcda10adbb-init/diff:/var/lib/docker/overlay2/df463cec8bfb6e167fe65d2de959d2835d839df5d29dad0284e7abf6afbac443/diff",
"MergedDir": "/var/lib/docker/overlay2/3303dc1dfcec74d7fa27dc3d78662cfd4ed7429ac134fa3faec78dbcda10adbb/merged",
"UpperDir": "/var/lib/docker/overlay2/3303dc1dfcec74d7fa27dc3d78662cfd4ed7429ac134fa3faec78dbcda10adbb/diff",
"WorkDir": "/var/lib/docker/overlay2/3303dc1dfcec74d7fa27dc3d78662cfd4ed7429ac134fa3faec78dbcda10adbb/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "force-systemd-flag-610060",
"Source": "/var/lib/docker/volumes/force-systemd-flag-610060/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "force-systemd-flag-610060",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "force-systemd-flag-610060",
"name.minikube.sigs.k8s.io": "force-systemd-flag-610060",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "e09bffb69bfeaa2e9e1334ad39b7ef5deb66204a099396be0fedeac63070bd3b",
"SandboxKey": "/var/run/docker/netns/e09bffb69bfe",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "35813"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "35814"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "35817"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "35815"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "35816"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"force-systemd-flag-610060": {
"IPAMConfig": {
"IPv4Address": "192.168.85.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "ae:95:b0:9d:ee:ba",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "a1d8b67bcadc5f12a2e757111f8d5de32531915336d5c492f2148c9847055be3",
"EndpointID": "2f08efa16d250bd664c6cebb474c0a73513564f8eebe1a31d64c958ff5d39f91",
"Gateway": "192.168.85.1",
"IPAddress": "192.168.85.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"force-systemd-flag-610060",
"13258c8511db"
]
}
}
}
}
]
-- /stdout --
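Individual fields can be pulled out of an inspect dump like the one above with a Go template instead of reading the full JSON; for example, the host port published for the apiserver's 8443/tcp (35816 above):

  docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' force-systemd-flag-610060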
helpers_test.go:248: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-610060 -n force-systemd-flag-610060
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-610060 -n force-systemd-flag-610060: exit status 6 (311.010865ms)
-- stdout --
Running
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E0111 08:20:08.117839 3358915 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-610060" does not appear in /home/jenkins/minikube-integration/22402-3122619/kubeconfig
** /stderr **
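The stale-context warning and the missing kubeconfig endpoint above are what the printed suggestion repairs; for this profile that would be roughly:

  out/minikube-linux-arm64 -p force-systemd-flag-610060 update-context
  kubectl config current-context    # should then name the profile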
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestForceSystemdFlag FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======> post-mortem[TestForceSystemdFlag]: minikube logs <======
helpers_test.go:256: (dbg) Run: out/minikube-linux-arm64 -p force-systemd-flag-610060 logs -n 25
helpers_test.go:261: TestForceSystemdFlag logs:
-- stdout --
==> Audit <==
┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ delete │ -p cert-options-554375 │ cert-options-554375 │ jenkins │ v1.37.0 │ 11 Jan 26 08:14 UTC │ 11 Jan 26 08:14 UTC │
│ start │ -p old-k8s-version-334404 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-334404 │ jenkins │ v1.37.0 │ 11 Jan 26 08:14 UTC │ 11 Jan 26 08:15 UTC │
│ addons │ enable metrics-server -p old-k8s-version-334404 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ old-k8s-version-334404 │ jenkins │ v1.37.0 │ 11 Jan 26 08:15 UTC │ 11 Jan 26 08:15 UTC │
│ stop │ -p old-k8s-version-334404 --alsologtostderr -v=3 │ old-k8s-version-334404 │ jenkins │ v1.37.0 │ 11 Jan 26 08:15 UTC │ 11 Jan 26 08:15 UTC │
│ addons │ enable dashboard -p old-k8s-version-334404 --images=MetricsScraper=registry.k8s.io/echoserver:1.4 │ old-k8s-version-334404 │ jenkins │ v1.37.0 │ 11 Jan 26 08:15 UTC │ 11 Jan 26 08:15 UTC │
│ start │ -p old-k8s-version-334404 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-334404 │ jenkins │ v1.37.0 │ 11 Jan 26 08:15 UTC │ 11 Jan 26 08:16 UTC │
│ image │ old-k8s-version-334404 image list --format=json │ old-k8s-version-334404 │ jenkins │ v1.37.0 │ 11 Jan 26 08:16 UTC │ 11 Jan 26 08:16 UTC │
│ pause │ -p old-k8s-version-334404 --alsologtostderr -v=1 │ old-k8s-version-334404 │ jenkins │ v1.37.0 │ 11 Jan 26 08:16 UTC │ 11 Jan 26 08:16 UTC │
│ unpause │ -p old-k8s-version-334404 --alsologtostderr -v=1 │ old-k8s-version-334404 │ jenkins │ v1.37.0 │ 11 Jan 26 08:16 UTC │ 11 Jan 26 08:16 UTC │
│ delete │ -p old-k8s-version-334404 │ old-k8s-version-334404 │ jenkins │ v1.37.0 │ 11 Jan 26 08:16 UTC │ 11 Jan 26 08:16 UTC │
│ delete │ -p old-k8s-version-334404 │ old-k8s-version-334404 │ jenkins │ v1.37.0 │ 11 Jan 26 08:16 UTC │ 11 Jan 26 08:16 UTC │
│ start │ -p no-preload-563183 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0 │ no-preload-563183 │ jenkins │ v1.37.0 │ 11 Jan 26 08:16 UTC │ 11 Jan 26 08:17 UTC │
│ addons │ enable metrics-server -p no-preload-563183 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ no-preload-563183 │ jenkins │ v1.37.0 │ 11 Jan 26 08:17 UTC │ 11 Jan 26 08:17 UTC │
│ stop │ -p no-preload-563183 --alsologtostderr -v=3 │ no-preload-563183 │ jenkins │ v1.37.0 │ 11 Jan 26 08:17 UTC │ 11 Jan 26 08:17 UTC │
│ addons │ enable dashboard -p no-preload-563183 --images=MetricsScraper=registry.k8s.io/echoserver:1.4 │ no-preload-563183 │ jenkins │ v1.37.0 │ 11 Jan 26 08:17 UTC │ 11 Jan 26 08:17 UTC │
│ start │ -p no-preload-563183 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0 │ no-preload-563183 │ jenkins │ v1.37.0 │ 11 Jan 26 08:17 UTC │ 11 Jan 26 08:18 UTC │
│ image │ no-preload-563183 image list --format=json │ no-preload-563183 │ jenkins │ v1.37.0 │ 11 Jan 26 08:18 UTC │ 11 Jan 26 08:18 UTC │
│ pause │ -p no-preload-563183 --alsologtostderr -v=1 │ no-preload-563183 │ jenkins │ v1.37.0 │ 11 Jan 26 08:18 UTC │ 11 Jan 26 08:18 UTC │
│ unpause │ -p no-preload-563183 --alsologtostderr -v=1 │ no-preload-563183 │ jenkins │ v1.37.0 │ 11 Jan 26 08:18 UTC │ 11 Jan 26 08:18 UTC │
│ delete │ -p no-preload-563183 │ no-preload-563183 │ jenkins │ v1.37.0 │ 11 Jan 26 08:18 UTC │ 11 Jan 26 08:19 UTC │
│ delete │ -p no-preload-563183 │ no-preload-563183 │ jenkins │ v1.37.0 │ 11 Jan 26 08:19 UTC │ 11 Jan 26 08:19 UTC │
│ start │ -p embed-certs-239792 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0 │ embed-certs-239792 │ jenkins │ v1.37.0 │ 11 Jan 26 08:19 UTC │ 11 Jan 26 08:19 UTC │
│ addons │ enable metrics-server -p embed-certs-239792 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ embed-certs-239792 │ jenkins │ v1.37.0 │ 11 Jan 26 08:19 UTC │ 11 Jan 26 08:19 UTC │
│ stop │ -p embed-certs-239792 --alsologtostderr -v=3 │ embed-certs-239792 │ jenkins │ v1.37.0 │ 11 Jan 26 08:19 UTC │ │
│ ssh │ force-systemd-flag-610060 ssh cat /etc/containerd/config.toml │ force-systemd-flag-610060 │ jenkins │ v1.37.0 │ 11 Jan 26 08:20 UTC │ 11 Jan 26 08:20 UTC │
└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2026/01/11 08:19:02
Running on machine: ip-172-31-29-130
Binary: Built with gc go1.25.5 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0111 08:19:02.476072 3354790 out.go:360] Setting OutFile to fd 1 ...
I0111 08:19:02.476232 3354790 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 08:19:02.476242 3354790 out.go:374] Setting ErrFile to fd 2...
I0111 08:19:02.476248 3354790 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 08:19:02.476560 3354790 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-3122619/.minikube/bin
I0111 08:19:02.477023 3354790 out.go:368] Setting JSON to false
I0111 08:19:02.477864 3354790 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":50494,"bootTime":1768069049,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
I0111 08:19:02.477934 3354790 start.go:143] virtualization:
I0111 08:19:02.484119 3354790 out.go:179] * [embed-certs-239792] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I0111 08:19:02.487599 3354790 out.go:179] - MINIKUBE_LOCATION=22402
I0111 08:19:02.487716 3354790 notify.go:221] Checking for updates...
I0111 08:19:02.494008 3354790 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0111 08:19:02.497155 3354790 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22402-3122619/kubeconfig
I0111 08:19:02.500131 3354790 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-3122619/.minikube
I0111 08:19:02.503189 3354790 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I0111 08:19:02.506234 3354790 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I0111 08:19:02.509650 3354790 config.go:182] Loaded profile config "force-systemd-flag-610060": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I0111 08:19:02.509774 3354790 driver.go:422] Setting default libvirt URI to qemu:///system
I0111 08:19:02.546473 3354790 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
I0111 08:19:02.546595 3354790 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0111 08:19:02.629622 3354790 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 08:19:02.620293113 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I0111 08:19:02.629728 3354790 docker.go:319] overlay module found
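The docker info dump above is also where the host's cgroup driver surfaces (CgroupDriver:cgroupfs here), the very setting the --force-systemd flag under test overrides inside the node; it can be queried directly:

  docker system info --format '{{.CgroupDriver}}'    # prints cgroupfs on this host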
I0111 08:19:02.634970 3354790 out.go:179] * Using the docker driver based on user configuration
I0111 08:19:02.637937 3354790 start.go:309] selected driver: docker
I0111 08:19:02.637962 3354790 start.go:928] validating driver "docker" against <nil>
I0111 08:19:02.637976 3354790 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0111 08:19:02.638745 3354790 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0111 08:19:02.699375 3354790 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 08:19:02.689814575 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I0111 08:19:02.699537 3354790 start_flags.go:333] no existing cluster config was found, will generate one from the flags
I0111 08:19:02.699772 3354790 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0111 08:19:02.702839 3354790 out.go:179] * Using Docker driver with root privileges
I0111 08:19:02.705866 3354790 cni.go:84] Creating CNI manager for ""
I0111 08:19:02.705941 3354790 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0111 08:19:02.705955 3354790 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
I0111 08:19:02.706039 3354790 start.go:353] cluster config:
{Name:embed-certs-239792 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-239792 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I0111 08:19:02.709108 3354790 out.go:179] * Starting "embed-certs-239792" primary control-plane node in "embed-certs-239792" cluster
I0111 08:19:02.711876 3354790 cache.go:134] Beginning downloading kic base image for docker with containerd
I0111 08:19:02.714741 3354790 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
I0111 08:19:02.717565 3354790 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
I0111 08:19:02.717566 3354790 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I0111 08:19:02.717637 3354790 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-3122619/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
I0111 08:19:02.717646 3354790 cache.go:65] Caching tarball of preloaded images
I0111 08:19:02.717727 3354790 preload.go:251] Found /home/jenkins/minikube-integration/22402-3122619/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I0111 08:19:02.717737 3354790 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
I0111 08:19:02.717847 3354790 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/config.json ...
I0111 08:19:02.717869 3354790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/config.json: {Name:mk6ff7aa76924208f5adafe031a39c23e80e0d5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0111 08:19:02.736012 3354790 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
I0111 08:19:02.736037 3354790 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in daemon, skipping load
I0111 08:19:02.736061 3354790 cache.go:243] Successfully downloaded all kic artifacts
I0111 08:19:02.736093 3354790 start.go:360] acquireMachinesLock for embed-certs-239792: {Name:mk5b08453b2b6902642bd60cad5e87b3738323be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0111 08:19:02.736214 3354790 start.go:364] duration metric: took 97.705µs to acquireMachinesLock for "embed-certs-239792"
I0111 08:19:02.736246 3354790 start.go:93] Provisioning new machine with config: &{Name:embed-certs-239792 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-239792 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0111 08:19:02.736348 3354790 start.go:125] createHost starting for "" (driver="docker")
I0111 08:19:02.739776 3354790 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
I0111 08:19:02.740013 3354790 start.go:159] libmachine.API.Create for "embed-certs-239792" (driver="docker")
I0111 08:19:02.740049 3354790 client.go:173] LocalClient.Create starting
I0111 08:19:02.740123 3354790 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca.pem
I0111 08:19:02.740162 3354790 main.go:144] libmachine: Decoding PEM data...
I0111 08:19:02.740181 3354790 main.go:144] libmachine: Parsing certificate...
I0111 08:19:02.740237 3354790 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/cert.pem
I0111 08:19:02.740266 3354790 main.go:144] libmachine: Decoding PEM data...
I0111 08:19:02.740277 3354790 main.go:144] libmachine: Parsing certificate...
I0111 08:19:02.740767 3354790 cli_runner.go:164] Run: docker network inspect embed-certs-239792 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0111 08:19:02.757322 3354790 cli_runner.go:211] docker network inspect embed-certs-239792 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0111 08:19:02.757411 3354790 network_create.go:284] running [docker network inspect embed-certs-239792] to gather additional debugging logs...
I0111 08:19:02.757436 3354790 cli_runner.go:164] Run: docker network inspect embed-certs-239792
W0111 08:19:02.772184 3354790 cli_runner.go:211] docker network inspect embed-certs-239792 returned with exit code 1
I0111 08:19:02.772222 3354790 network_create.go:287] error running [docker network inspect embed-certs-239792]: docker network inspect embed-certs-239792: exit status 1
stdout:
[]
stderr:
Error response from daemon: network embed-certs-239792 not found
I0111 08:19:02.772236 3354790 network_create.go:289] output of [docker network inspect embed-certs-239792]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network embed-certs-239792 not found
** /stderr **
I0111 08:19:02.772460 3354790 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0111 08:19:02.789854 3354790 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6d6a2604bb10 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ca:cd:63:f9:b2:f8} reservation:<nil>}
I0111 08:19:02.790351 3354790 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-cec031213447 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:62:71:bf:56:ac:cb} reservation:<nil>}
I0111 08:19:02.790630 3354790 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0e2d137ca1da IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:12:68:81:9e:35:63} reservation:<nil>}
I0111 08:19:02.791263 3354790 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019cd120}
I0111 08:19:02.791296 3354790 network_create.go:124] attempt to create docker network embed-certs-239792 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
I0111 08:19:02.791411 3354790 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-239792 embed-certs-239792
I0111 08:19:02.855591 3354790 network_create.go:108] docker network embed-certs-239792 192.168.76.0/24 created
I0111 08:19:02.855627 3354790 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-239792" container
I0111 08:19:02.855706 3354790 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0111 08:19:02.872321 3354790 cli_runner.go:164] Run: docker volume create embed-certs-239792 --label name.minikube.sigs.k8s.io=embed-certs-239792 --label created_by.minikube.sigs.k8s.io=true
I0111 08:19:02.889956 3354790 oci.go:103] Successfully created a docker volume embed-certs-239792
I0111 08:19:02.890047 3354790 cli_runner.go:164] Run: docker run --rm --name embed-certs-239792-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-239792 --entrypoint /usr/bin/test -v embed-certs-239792:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -d /var/lib
I0111 08:19:03.403926 3354790 oci.go:107] Successfully prepared a docker volume embed-certs-239792
I0111 08:19:03.403996 3354790 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I0111 08:19:03.404008 3354790 kic.go:194] Starting extracting preloaded images to volume ...
I0111 08:19:03.404081 3354790 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-3122619/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-239792:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir
I0111 08:19:07.260156 3354790 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-3122619/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-239792:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir: (3.856039779s)
I0111 08:19:07.260187 3354790 kic.go:203] duration metric: took 3.856175735s to extract preloaded images to volume ...
W0111 08:19:07.260369 3354790 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0111 08:19:07.260487 3354790 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0111 08:19:07.320710 3354790 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-239792 --name embed-certs-239792 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-239792 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-239792 --network embed-certs-239792 --ip 192.168.76.2 --volume embed-certs-239792:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615
I0111 08:19:07.656032 3354790 cli_runner.go:164] Run: docker container inspect embed-certs-239792 --format={{.State.Running}}
I0111 08:19:07.680144 3354790 cli_runner.go:164] Run: docker container inspect embed-certs-239792 --format={{.State.Status}}
I0111 08:19:07.703298 3354790 cli_runner.go:164] Run: docker exec embed-certs-239792 stat /var/lib/dpkg/alternatives/iptables
I0111 08:19:07.754381 3354790 oci.go:144] the created container "embed-certs-239792" has a running status.
I0111 08:19:07.754409 3354790 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22402-3122619/.minikube/machines/embed-certs-239792/id_rsa...
I0111 08:19:07.906741 3354790 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22402-3122619/.minikube/machines/embed-certs-239792/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0111 08:19:07.930180 3354790 cli_runner.go:164] Run: docker container inspect embed-certs-239792 --format={{.State.Status}}
I0111 08:19:07.952075 3354790 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0111 08:19:07.952093 3354790 kic_runner.go:114] Args: [docker exec --privileged embed-certs-239792 chown docker:docker /home/docker/.ssh/authorized_keys]
I0111 08:19:08.000055 3354790 cli_runner.go:164] Run: docker container inspect embed-certs-239792 --format={{.State.Status}}
I0111 08:19:08.023510 3354790 machine.go:94] provisionDockerMachine start ...
I0111 08:19:08.023600 3354790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-239792
I0111 08:19:08.054227 3354790 main.go:144] libmachine: Using SSH client type: native
I0111 08:19:08.054566 3354790 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil> [] 0s} 127.0.0.1 35843 <nil> <nil>}
I0111 08:19:08.054581 3354790 main.go:144] libmachine: About to run SSH command:
hostname
I0111 08:19:08.055154 3354790 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39750->127.0.0.1:35843: read: connection reset by peer
I0111 08:19:11.208272 3354790 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-239792
I0111 08:19:11.208368 3354790 ubuntu.go:182] provisioning hostname "embed-certs-239792"
I0111 08:19:11.208464 3354790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-239792
I0111 08:19:11.226933 3354790 main.go:144] libmachine: Using SSH client type: native
I0111 08:19:11.227248 3354790 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil> [] 0s} 127.0.0.1 35843 <nil> <nil>}
I0111 08:19:11.227265 3354790 main.go:144] libmachine: About to run SSH command:
sudo hostname embed-certs-239792 && echo "embed-certs-239792" | sudo tee /etc/hostname
I0111 08:19:11.385593 3354790 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-239792
I0111 08:19:11.385776 3354790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-239792
I0111 08:19:11.403545 3354790 main.go:144] libmachine: Using SSH client type: native
I0111 08:19:11.403849 3354790 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil> [] 0s} 127.0.0.1 35843 <nil> <nil>}
I0111 08:19:11.403865 3354790 main.go:144] libmachine: About to run SSH command:
if ! grep -xq '.*\sembed-certs-239792' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-239792/g' /etc/hosts;
  else
    echo '127.0.1.1 embed-certs-239792' | sudo tee -a /etc/hosts;
  fi
fi
I0111 08:19:11.552833 3354790 main.go:144] libmachine: SSH cmd err, output: <nil>:
I0111 08:19:11.552864 3354790 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22402-3122619/.minikube CaCertPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22402-3122619/.minikube}
I0111 08:19:11.552886 3354790 ubuntu.go:190] setting up certificates
I0111 08:19:11.552895 3354790 provision.go:84] configureAuth start
I0111 08:19:11.552968 3354790 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-239792
I0111 08:19:11.570815 3354790 provision.go:143] copyHostCerts
I0111 08:19:11.570878 3354790 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.pem, removing ...
I0111 08:19:11.570886 3354790 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.pem
I0111 08:19:11.570964 3354790 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.pem (1078 bytes)
I0111 08:19:11.571059 3354790 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-3122619/.minikube/cert.pem, removing ...
I0111 08:19:11.571065 3354790 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-3122619/.minikube/cert.pem
I0111 08:19:11.571089 3354790 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22402-3122619/.minikube/cert.pem (1123 bytes)
I0111 08:19:11.571142 3354790 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-3122619/.minikube/key.pem, removing ...
I0111 08:19:11.571147 3354790 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-3122619/.minikube/key.pem
I0111 08:19:11.571168 3354790 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22402-3122619/.minikube/key.pem (1675 bytes)
I0111 08:19:11.571211 3354790 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca-key.pem org=jenkins.embed-certs-239792 san=[127.0.0.1 192.168.76.2 embed-certs-239792 localhost minikube]
I0111 08:19:11.697056 3354790 provision.go:177] copyRemoteCerts
I0111 08:19:11.697128 3354790 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0111 08:19:11.697172 3354790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-239792
I0111 08:19:11.715383 3354790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35843 SSHKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/embed-certs-239792/id_rsa Username:docker}
I0111 08:19:11.820779 3354790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0111 08:19:11.841550 3354790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I0111 08:19:11.862065 3354790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0111 08:19:11.879110 3354790 provision.go:87] duration metric: took 326.189464ms to configureAuth
I0111 08:19:11.879191 3354790 ubuntu.go:206] setting minikube options for container-runtime
I0111 08:19:11.879406 3354790 config.go:182] Loaded profile config "embed-certs-239792": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I0111 08:19:11.879423 3354790 machine.go:97] duration metric: took 3.855894305s to provisionDockerMachine
I0111 08:19:11.879431 3354790 client.go:176] duration metric: took 9.139371359s to LocalClient.Create
I0111 08:19:11.879450 3354790 start.go:167] duration metric: took 9.139438393s to libmachine.API.Create "embed-certs-239792"
I0111 08:19:11.879457 3354790 start.go:293] postStartSetup for "embed-certs-239792" (driver="docker")
I0111 08:19:11.879472 3354790 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0111 08:19:11.879532 3354790 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0111 08:19:11.879577 3354790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-239792
I0111 08:19:11.896718 3354790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35843 SSHKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/embed-certs-239792/id_rsa Username:docker}
I0111 08:19:12.008666 3354790 ssh_runner.go:195] Run: cat /etc/os-release
I0111 08:19:12.012948 3354790 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0111 08:19:12.012979 3354790 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I0111 08:19:12.012992 3354790 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-3122619/.minikube/addons for local assets ...
I0111 08:19:12.013102 3354790 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-3122619/.minikube/files for local assets ...
I0111 08:19:12.013217 3354790 filesync.go:149] local asset: /home/jenkins/minikube-integration/22402-3122619/.minikube/files/etc/ssl/certs/31244842.pem -> 31244842.pem in /etc/ssl/certs
I0111 08:19:12.013335 3354790 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0111 08:19:12.021915 3354790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/files/etc/ssl/certs/31244842.pem --> /etc/ssl/certs/31244842.pem (1708 bytes)
I0111 08:19:12.040856 3354790 start.go:296] duration metric: took 161.384464ms for postStartSetup
I0111 08:19:12.041244 3354790 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-239792
I0111 08:19:12.058351 3354790 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/config.json ...
I0111 08:19:12.058661 3354790 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0111 08:19:12.058712 3354790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-239792
I0111 08:19:12.075416 3354790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35843 SSHKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/embed-certs-239792/id_rsa Username:docker}
I0111 08:19:12.177555 3354790 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0111 08:19:12.182584 3354790 start.go:128] duration metric: took 9.446219644s to createHost
I0111 08:19:12.182611 3354790 start.go:83] releasing machines lock for "embed-certs-239792", held for 9.446382725s
I0111 08:19:12.182706 3354790 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-239792
I0111 08:19:12.199196 3354790 ssh_runner.go:195] Run: cat /version.json
I0111 08:19:12.199245 3354790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-239792
I0111 08:19:12.199265 3354790 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0111 08:19:12.199322 3354790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-239792
I0111 08:19:12.217368 3354790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35843 SSHKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/embed-certs-239792/id_rsa Username:docker}
I0111 08:19:12.228021 3354790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35843 SSHKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/embed-certs-239792/id_rsa Username:docker}
I0111 08:19:12.420699 3354790 ssh_runner.go:195] Run: systemctl --version
I0111 08:19:12.427351 3354790 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0111 08:19:12.431803 3354790 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0111 08:19:12.431952 3354790 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0111 08:19:12.460179 3354790 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
I0111 08:19:12.460252 3354790 start.go:496] detecting cgroup driver to use...
I0111 08:19:12.460335 3354790 detect.go:175] detected "cgroupfs" cgroup driver on host os
I0111 08:19:12.460403 3354790 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0111 08:19:12.475336 3354790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0111 08:19:12.488537 3354790 docker.go:218] disabling cri-docker service (if available) ...
I0111 08:19:12.488633 3354790 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0111 08:19:12.506770 3354790 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0111 08:19:12.525173 3354790 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0111 08:19:12.671379 3354790 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0111 08:19:12.797777 3354790 docker.go:234] disabling docker service ...
I0111 08:19:12.797849 3354790 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0111 08:19:12.821196 3354790 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0111 08:19:12.834644 3354790 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0111 08:19:12.958805 3354790 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0111 08:19:13.083683 3354790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0111 08:19:13.097191 3354790 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0111 08:19:13.112629 3354790 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I0111 08:19:13.122171 3354790 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0111 08:19:13.131227 3354790 containerd.go:147] configuring containerd to use "cgroupfs" as cgroup driver...
I0111 08:19:13.131338 3354790 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0111 08:19:13.140438 3354790 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0111 08:19:13.149884 3354790 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0111 08:19:13.158791 3354790 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0111 08:19:13.167913 3354790 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0111 08:19:13.176867 3354790 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0111 08:19:13.186081 3354790 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0111 08:19:13.194914 3354790 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0111 08:19:13.204100 3354790 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0111 08:19:13.212221 3354790 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0111 08:19:13.219910 3354790 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0111 08:19:13.357285 3354790 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0111 08:19:13.486148 3354790 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
I0111 08:19:13.486218 3354790 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0111 08:19:13.490524 3354790 start.go:574] Will wait 60s for crictl version
I0111 08:19:13.490602 3354790 ssh_runner.go:195] Run: which crictl
I0111 08:19:13.494183 3354790 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I0111 08:19:13.518293 3354790 start.go:590] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v2.2.1
RuntimeApiVersion: v1
I0111 08:19:13.518362 3354790 ssh_runner.go:195] Run: containerd --version
I0111 08:19:13.540885 3354790 ssh_runner.go:195] Run: containerd --version
I0111 08:19:13.565153 3354790 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
I0111 08:19:13.568318 3354790 cli_runner.go:164] Run: docker network inspect embed-certs-239792 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0111 08:19:13.584596 3354790 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I0111 08:19:13.588533 3354790 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0111 08:19:13.598614 3354790 kubeadm.go:884] updating cluster {Name:embed-certs-239792 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-239792 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
I0111 08:19:13.598733 3354790 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I0111 08:19:13.598807 3354790 ssh_runner.go:195] Run: sudo crictl images --output json
I0111 08:19:13.625049 3354790 containerd.go:635] all images are preloaded for containerd runtime.
I0111 08:19:13.625074 3354790 containerd.go:542] Images already preloaded, skipping extraction
I0111 08:19:13.625136 3354790 ssh_runner.go:195] Run: sudo crictl images --output json
I0111 08:19:13.650215 3354790 containerd.go:635] all images are preloaded for containerd runtime.
I0111 08:19:13.650241 3354790 cache_images.go:86] Images are preloaded, skipping loading
I0111 08:19:13.650249 3354790 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 containerd true true} ...
I0111 08:19:13.650347 3354790 kubeadm.go:947] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-239792 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
[Install]
config:
{KubernetesVersion:v1.35.0 ClusterName:embed-certs-239792 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0111 08:19:13.650414 3354790 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
I0111 08:19:13.680955 3354790 cni.go:84] Creating CNI manager for ""
I0111 08:19:13.680987 3354790 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0111 08:19:13.681010 3354790 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I0111 08:19:13.681034 3354790 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-239792 NodeName:embed-certs-239792 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0111 08:19:13.681152 3354790 kubeadm.go:203] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.76.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "embed-certs-239792"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.76.2"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.35.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
failCgroupV1: false
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0111 08:19:13.681226 3354790 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
I0111 08:19:13.689351 3354790 binaries.go:51] Found k8s binaries, skipping transfer
I0111 08:19:13.689477 3354790 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0111 08:19:13.697540 3354790 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
I0111 08:19:13.710955 3354790 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0111 08:19:13.724046 3354790 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2251 bytes)
I0111 08:19:13.737418 3354790 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I0111 08:19:13.741282 3354790 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0111 08:19:13.750773 3354790 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0111 08:19:13.867134 3354790 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0111 08:19:13.884767 3354790 certs.go:69] Setting up /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792 for IP: 192.168.76.2
I0111 08:19:13.884790 3354790 certs.go:195] generating shared ca certs ...
I0111 08:19:13.884807 3354790 certs.go:227] acquiring lock for ca certs: {Name:mk4f88e5992499f3a8089baf463e3ba7f81a52c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0111 08:19:13.884965 3354790 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.key
I0111 08:19:13.885013 3354790 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/proxy-client-ca.key
I0111 08:19:13.885024 3354790 certs.go:257] generating profile certs ...
I0111 08:19:13.885081 3354790 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/client.key
I0111 08:19:13.885106 3354790 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/client.crt with IP's: []
I0111 08:19:13.943644 3354790 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/client.crt ...
I0111 08:19:13.943680 3354790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/client.crt: {Name:mk0842e9d75ef0cac3d0190ac4ce2d004aad0c36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0111 08:19:13.943905 3354790 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/client.key ...
I0111 08:19:13.943924 3354790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/client.key: {Name:mk13bf00d0593eaef359005936d66f6485336a48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0111 08:19:13.944042 3354790 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/apiserver.key.15041368
I0111 08:19:13.944062 3354790 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/apiserver.crt.15041368 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
I0111 08:19:14.167158 3354790 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/apiserver.crt.15041368 ...
I0111 08:19:14.167189 3354790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/apiserver.crt.15041368: {Name:mkcdfd32e371e00491dd784b356a0a4a3153fe58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0111 08:19:14.167380 3354790 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/apiserver.key.15041368 ...
I0111 08:19:14.167397 3354790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/apiserver.key.15041368: {Name:mk146db4887bf4ca2b0df30bc734540a70e203e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0111 08:19:14.167495 3354790 certs.go:382] copying /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/apiserver.crt.15041368 -> /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/apiserver.crt
I0111 08:19:14.167609 3354790 certs.go:386] copying /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/apiserver.key.15041368 -> /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/apiserver.key
I0111 08:19:14.167674 3354790 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/proxy-client.key
I0111 08:19:14.167695 3354790 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/proxy-client.crt with IP's: []
I0111 08:19:14.559448 3354790 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/proxy-client.crt ...
I0111 08:19:14.559481 3354790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/proxy-client.crt: {Name:mk2f94ec1f047c5b25028b0889400ee386ffd990 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0111 08:19:14.559666 3354790 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/proxy-client.key ...
I0111 08:19:14.559681 3354790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/proxy-client.key: {Name:mkd47bf0b7a40371f79943d77ef7b1cce27993f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0111 08:19:14.559875 3354790 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/3124484.pem (1338 bytes)
W0111 08:19:14.559919 3354790 certs.go:480] ignoring /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/3124484_empty.pem, impossibly tiny 0 bytes
I0111 08:19:14.559932 3354790 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca-key.pem (1679 bytes)
I0111 08:19:14.559958 3354790 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca.pem (1078 bytes)
I0111 08:19:14.559988 3354790 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/cert.pem (1123 bytes)
I0111 08:19:14.560017 3354790 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/key.pem (1675 bytes)
I0111 08:19:14.560067 3354790 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/files/etc/ssl/certs/31244842.pem (1708 bytes)
I0111 08:19:14.560648 3354790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0111 08:19:14.580052 3354790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0111 08:19:14.598952 3354790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0111 08:19:14.617013 3354790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0111 08:19:14.635698 3354790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
I0111 08:19:14.653629 3354790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0111 08:19:14.674994 3354790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0111 08:19:14.693415 3354790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0111 08:19:14.711755 3354790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/files/etc/ssl/certs/31244842.pem --> /usr/share/ca-certificates/31244842.pem (1708 bytes)
I0111 08:19:14.730229 3354790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0111 08:19:14.748738 3354790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/3124484.pem --> /usr/share/ca-certificates/3124484.pem (1338 bytes)
I0111 08:19:14.766801 3354790 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
I0111 08:19:14.780407 3354790 ssh_runner.go:195] Run: openssl version
I0111 08:19:14.786715 3354790 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/31244842.pem
I0111 08:19:14.811910 3354790 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/31244842.pem /etc/ssl/certs/31244842.pem
I0111 08:19:14.832709 3354790 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/31244842.pem
I0111 08:19:14.839038 3354790 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 11 07:32 /usr/share/ca-certificates/31244842.pem
I0111 08:19:14.839107 3354790 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/31244842.pem
I0111 08:19:14.901727 3354790 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I0111 08:19:14.909372 3354790 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/31244842.pem /etc/ssl/certs/3ec20f2e.0
I0111 08:19:14.917084 3354790 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I0111 08:19:14.924955 3354790 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I0111 08:19:14.932648 3354790 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0111 08:19:14.936491 3354790 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 11 07:26 /usr/share/ca-certificates/minikubeCA.pem
I0111 08:19:14.936605 3354790 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0111 08:19:14.978152 3354790 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I0111 08:19:14.986215 3354790 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
I0111 08:19:14.994088 3354790 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3124484.pem
I0111 08:19:15.007465 3354790 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3124484.pem /etc/ssl/certs/3124484.pem
I0111 08:19:15.020971 3354790 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3124484.pem
I0111 08:19:15.025889 3354790 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 11 07:32 /usr/share/ca-certificates/3124484.pem
I0111 08:19:15.025991 3354790 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3124484.pem
I0111 08:19:15.070662 3354790 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I0111 08:19:15.078790 3354790 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3124484.pem /etc/ssl/certs/51391683.0
I0111 08:19:15.087251 3354790 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0111 08:19:15.091300 3354790 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0111 08:19:15.091360 3354790 kubeadm.go:401] StartCluster: {Name:embed-certs-239792 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-239792 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I0111 08:19:15.091448 3354790 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0111 08:19:15.091523 3354790 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0111 08:19:15.131620 3354790 cri.go:96] found id: ""
I0111 08:19:15.131712 3354790 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0111 08:19:15.139974 3354790 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0111 08:19:15.148194 3354790 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I0111 08:19:15.148315 3354790 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0111 08:19:15.156577 3354790 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0111 08:19:15.156602 3354790 kubeadm.go:158] found existing configuration files:
I0111 08:19:15.156723 3354790 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0111 08:19:15.165010 3354790 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0111 08:19:15.165089 3354790 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0111 08:19:15.172999 3354790 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0111 08:19:15.181101 3354790 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0111 08:19:15.181203 3354790 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0111 08:19:15.188973 3354790 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0111 08:19:15.196881 3354790 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0111 08:19:15.196952 3354790 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0111 08:19:15.204688 3354790 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0111 08:19:15.212543 3354790 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0111 08:19:15.212661 3354790 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0111 08:19:15.220182 3354790 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0111 08:19:15.259052 3354790 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
I0111 08:19:15.259180 3354790 kubeadm.go:319] [preflight] Running pre-flight checks
I0111 08:19:15.329844 3354790 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I0111 08:19:15.329983 3354790 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I0111 08:19:15.330045 3354790 kubeadm.go:319] OS: Linux
I0111 08:19:15.330117 3354790 kubeadm.go:319] CGROUPS_CPU: enabled
I0111 08:19:15.330181 3354790 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I0111 08:19:15.330260 3354790 kubeadm.go:319] CGROUPS_CPUSET: enabled
I0111 08:19:15.330334 3354790 kubeadm.go:319] CGROUPS_DEVICES: enabled
I0111 08:19:15.330424 3354790 kubeadm.go:319] CGROUPS_FREEZER: enabled
I0111 08:19:15.330495 3354790 kubeadm.go:319] CGROUPS_MEMORY: enabled
I0111 08:19:15.330569 3354790 kubeadm.go:319] CGROUPS_PIDS: enabled
I0111 08:19:15.330637 3354790 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I0111 08:19:15.330714 3354790 kubeadm.go:319] CGROUPS_BLKIO: enabled
I0111 08:19:15.398924 3354790 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I0111 08:19:15.399107 3354790 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0111 08:19:15.399236 3354790 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0111 08:19:15.408650 3354790 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0111 08:19:15.415105 3354790 out.go:252] - Generating certificates and keys ...
I0111 08:19:15.415212 3354790 kubeadm.go:319] [certs] Using existing ca certificate authority
I0111 08:19:15.415286 3354790 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I0111 08:19:15.681486 3354790 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I0111 08:19:15.895857 3354790 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I0111 08:19:16.383963 3354790 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I0111 08:19:16.487976 3354790 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I0111 08:19:16.714930 3354790 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I0111 08:19:16.715311 3354790 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-239792 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
I0111 08:19:16.797140 3354790 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I0111 08:19:16.797507 3354790 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-239792 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
I0111 08:19:17.442442 3354790 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I0111 08:19:17.923345 3354790 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I0111 08:19:18.300542 3354790 kubeadm.go:319] [certs] Generating "sa" key and public key
I0111 08:19:18.300826 3354790 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0111 08:19:18.587740 3354790 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I0111 08:19:18.728555 3354790 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0111 08:19:19.148556 3354790 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0111 08:19:19.770672 3354790 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0111 08:19:19.915779 3354790 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0111 08:19:19.916501 3354790 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0111 08:19:19.919187 3354790 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0111 08:19:19.922908 3354790 out.go:252] - Booting up control plane ...
I0111 08:19:19.923024 3354790 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0111 08:19:19.923115 3354790 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0111 08:19:19.923182 3354790 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0111 08:19:19.939571 3354790 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0111 08:19:19.939937 3354790 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I0111 08:19:19.947697 3354790 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I0111 08:19:19.948188 3354790 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0111 08:19:19.948236 3354790 kubeadm.go:319] [kubelet-start] Starting the kubelet
I0111 08:19:20.099965 3354790 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0111 08:19:20.100087 3354790 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0111 08:19:20.603577 3354790 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 503.790259ms
I0111 08:19:20.607424 3354790 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I0111 08:19:20.607528 3354790 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
I0111 08:19:20.607624 3354790 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I0111 08:19:20.608165 3354790 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I0111 08:19:23.617247 3354790 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.008856075s
I0111 08:19:24.679203 3354790 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.070512111s
I0111 08:19:26.610243 3354790 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.002387831s
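The three [control-plane-check] results above come down to polling an HTTP(S) health endpoint until it answers 200 OK or a deadline expires. Below is a minimal Go sketch of that pattern, using the apiserver /livez URL and the 4m0s budget reported in this log. Skipping TLS verification is a simplification for the sketch (kubeadm trusts the cluster CA instead), so treat this as an illustration, not kubeadm's actual code.

// healthcheck.go - sketch of a control-plane health poll: retry an
// HTTPS GET until it returns 200 OK or the context deadline expires.
package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitHealthy(ctx context.Context, url string) error {
	client := &http.Client{
		// Sketch only: a real check would trust the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	for {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("%s not healthy before deadline: %w", url, ctx.Err())
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	// 4m0s matches the budget kubeadm reports for this check.
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	if err := waitHealthy(ctx, "https://192.168.76.2:8443/livez"); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver is healthy")
}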
I0111 08:19:26.649095 3354790 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0111 08:19:26.678273 3354790 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0111 08:19:26.696184 3354790 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
I0111 08:19:26.696463 3354790 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-239792 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0111 08:19:26.710449 3354790 kubeadm.go:319] [bootstrap-token] Using token: y49yks.7byb0evlqnwu15qk
I0111 08:19:26.713334 3354790 out.go:252] - Configuring RBAC rules ...
I0111 08:19:26.713461 3354790 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0111 08:19:26.724335 3354790 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0111 08:19:26.734879 3354790 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0111 08:19:26.741639 3354790 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
I0111 08:19:26.748473 3354790 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0111 08:19:26.753106 3354790 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0111 08:19:27.017622 3354790 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0111 08:19:27.447726 3354790 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
I0111 08:19:28.021172 3354790 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
I0111 08:19:28.022489 3354790 kubeadm.go:319]
I0111 08:19:28.022572 3354790 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
I0111 08:19:28.022583 3354790 kubeadm.go:319]
I0111 08:19:28.022662 3354790 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
I0111 08:19:28.022672 3354790 kubeadm.go:319]
I0111 08:19:28.022697 3354790 kubeadm.go:319] mkdir -p $HOME/.kube
I0111 08:19:28.022760 3354790 kubeadm.go:319] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0111 08:19:28.022814 3354790 kubeadm.go:319] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0111 08:19:28.022822 3354790 kubeadm.go:319]
I0111 08:19:28.022877 3354790 kubeadm.go:319] Alternatively, if you are the root user, you can run:
I0111 08:19:28.022885 3354790 kubeadm.go:319]
I0111 08:19:28.022933 3354790 kubeadm.go:319] export KUBECONFIG=/etc/kubernetes/admin.conf
I0111 08:19:28.022941 3354790 kubeadm.go:319]
I0111 08:19:28.022993 3354790 kubeadm.go:319] You should now deploy a pod network to the cluster.
I0111 08:19:28.023072 3354790 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0111 08:19:28.023144 3354790 kubeadm.go:319] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0111 08:19:28.023152 3354790 kubeadm.go:319]
I0111 08:19:28.023236 3354790 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
I0111 08:19:28.023317 3354790 kubeadm.go:319] and service account keys on each node and then running the following as root:
I0111 08:19:28.023324 3354790 kubeadm.go:319]
I0111 08:19:28.023415 3354790 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token y49yks.7byb0evlqnwu15qk \
I0111 08:19:28.023523 3354790 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:7fbdaf1f31f22210647da770a1c9ea2e312ca3de8444edfd85d94f45129ca0e7 \
I0111 08:19:28.023547 3354790 kubeadm.go:319] --control-plane
I0111 08:19:28.023555 3354790 kubeadm.go:319]
I0111 08:19:28.023639 3354790 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
I0111 08:19:28.023647 3354790 kubeadm.go:319]
I0111 08:19:28.023729 3354790 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token y49yks.7byb0evlqnwu15qk \
I0111 08:19:28.023834 3354790 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:7fbdaf1f31f22210647da770a1c9ea2e312ca3de8444edfd85d94f45129ca0e7
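The --discovery-token-ca-cert-hash value in the join commands above is, by kubeadm convention, "sha256:" followed by the hex SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A short Go sketch of that computation follows; the ca.crt path is an assumption based on the certificateDir reported earlier in this log.

// cacerthash.go - sketch: derive the discovery-token-ca-cert-hash
// from the cluster CA certificate (sha256 over its DER-encoded SPKI).
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Assumed path, from the "[certs] Using certificateDir" line above.
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum[:])
}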
I0111 08:19:28.027094 3354790 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I0111 08:19:28.027512 3354790 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I0111 08:19:28.027623 3354790 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0111 08:19:28.027640 3354790 cni.go:84] Creating CNI manager for ""
I0111 08:19:28.027654 3354790 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0111 08:19:28.031273 3354790 out.go:179] * Configuring CNI (Container Networking Interface) ...
I0111 08:19:28.034237 3354790 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0111 08:19:28.039981 3354790 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
I0111 08:19:28.040004 3354790 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
I0111 08:19:28.074803 3354790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0111 08:19:28.414251 3354790 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0111 08:19:28.414386 3354790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0111 08:19:28.414488 3354790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-239792 minikube.k8s.io/updated_at=2026_01_11T08_19_28_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04 minikube.k8s.io/name=embed-certs-239792 minikube.k8s.io/primary=true
I0111 08:19:28.584973 3354790 ops.go:34] apiserver oom_adj: -16
I0111 08:19:28.585084 3354790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0111 08:19:29.085402 3354790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0111 08:19:29.585215 3354790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0111 08:19:30.085288 3354790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0111 08:19:30.585504 3354790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0111 08:19:31.085272 3354790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0111 08:19:31.585902 3354790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0111 08:19:32.085606 3354790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0111 08:19:32.585666 3354790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0111 08:19:32.686303 3354790 kubeadm.go:1114] duration metric: took 4.271967252s to wait for elevateKubeSystemPrivileges
I0111 08:19:32.686337 3354790 kubeadm.go:403] duration metric: took 17.594983028s to StartCluster
I0111 08:19:32.686354 3354790 settings.go:142] acquiring lock: {Name:mk941d920a0aafe770355773bf43dee753cabb3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0111 08:19:32.686419 3354790 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/22402-3122619/kubeconfig
I0111 08:19:32.687507 3354790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-3122619/kubeconfig: {Name:mk89d287b8f00e4766af7713066504256c0503e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0111 08:19:32.687744 3354790 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0111 08:19:32.687864 3354790 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0111 08:19:32.688112 3354790 config.go:182] Loaded profile config "embed-certs-239792": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I0111 08:19:32.688154 3354790 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0111 08:19:32.688219 3354790 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-239792"
I0111 08:19:32.688234 3354790 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-239792"
I0111 08:19:32.688260 3354790 host.go:66] Checking if "embed-certs-239792" exists ...
I0111 08:19:32.688494 3354790 addons.go:70] Setting default-storageclass=true in profile "embed-certs-239792"
I0111 08:19:32.688513 3354790 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-239792"
I0111 08:19:32.689008 3354790 cli_runner.go:164] Run: docker container inspect embed-certs-239792 --format={{.State.Status}}
I0111 08:19:32.689127 3354790 cli_runner.go:164] Run: docker container inspect embed-certs-239792 --format={{.State.Status}}
I0111 08:19:32.691678 3354790 out.go:179] * Verifying Kubernetes components...
I0111 08:19:32.700525 3354790 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0111 08:19:32.724312 3354790 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0111 08:19:32.727620 3354790 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0111 08:19:32.727644 3354790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0111 08:19:32.727718 3354790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-239792
I0111 08:19:32.733858 3354790 addons.go:239] Setting addon default-storageclass=true in "embed-certs-239792"
I0111 08:19:32.733904 3354790 host.go:66] Checking if "embed-certs-239792" exists ...
I0111 08:19:32.734335 3354790 cli_runner.go:164] Run: docker container inspect embed-certs-239792 --format={{.State.Status}}
I0111 08:19:32.764430 3354790 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
I0111 08:19:32.764450 3354790 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0111 08:19:32.764513 3354790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-239792
I0111 08:19:32.787253 3354790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35843 SSHKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/embed-certs-239792/id_rsa Username:docker}
I0111 08:19:32.798340 3354790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35843 SSHKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/embed-certs-239792/id_rsa Username:docker}
I0111 08:19:33.023935 3354790 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.76.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0111 08:19:33.045094 3354790 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0111 08:19:33.079694 3354790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0111 08:19:33.096495 3354790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0111 08:19:33.457640 3354790 start.go:987] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
I0111 08:19:33.459826 3354790 node_ready.go:35] waiting up to 6m0s for node "embed-certs-239792" to be "Ready" ...
I0111 08:19:33.964362 3354790 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-239792" context rescaled to 1 replicas
I0111 08:19:33.979828 3354790 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
I0111 08:19:33.982623 3354790 addons.go:530] duration metric: took 1.294456351s for enable addons: enabled=[default-storageclass storage-provisioner]
W0111 08:19:35.462674 3354790 node_ready.go:57] node "embed-certs-239792" has "Ready":"False" status (will retry)
W0111 08:19:37.463204 3354790 node_ready.go:57] node "embed-certs-239792" has "Ready":"False" status (will retry)
W0111 08:19:39.963372 3354790 node_ready.go:57] node "embed-certs-239792" has "Ready":"False" status (will retry)
W0111 08:19:41.963855 3354790 node_ready.go:57] node "embed-certs-239792" has "Ready":"False" status (will retry)
W0111 08:19:43.963990 3354790 node_ready.go:57] node "embed-certs-239792" has "Ready":"False" status (will retry)
I0111 08:19:45.470437 3354790 node_ready.go:49] node "embed-certs-239792" is "Ready"
I0111 08:19:45.470475 3354790 node_ready.go:38] duration metric: took 12.010582077s for node "embed-certs-239792" to be "Ready" ...
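The "Ready" wait above (12s, with a 6m0s budget) is a poll on the Node object's NodeReady condition. The following client-go sketch shows the shape of that loop; the kubeconfig path, node name, and timeout are taken from this log, and this is an illustration rather than minikube's own implementation.

// nodeready.go - sketch: poll a Node until its NodeReady condition
// reports True, or give up when the context deadline expires.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeIsReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		node, err := client.CoreV1().Nodes().Get(ctx, "embed-certs-239792", metav1.GetOptions{})
		if err == nil && nodeIsReady(node) {
			fmt.Println("node is Ready")
			return
		}
		select {
		case <-ctx.Done():
			log.Fatalf("node never became Ready: %v", ctx.Err())
		case <-time.After(2 * time.Second):
		}
	}
}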
I0111 08:19:45.470494 3354790 api_server.go:52] waiting for apiserver process to appear ...
I0111 08:19:45.470586 3354790 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0111 08:19:45.520226 3354790 api_server.go:72] duration metric: took 12.832443479s to wait for apiserver process to appear ...
I0111 08:19:45.520258 3354790 api_server.go:88] waiting for apiserver healthz status ...
I0111 08:19:45.520338 3354790 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0111 08:19:45.536662 3354790 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
ok
I0111 08:19:45.548192 3354790 api_server.go:141] control plane version: v1.35.0
I0111 08:19:45.548225 3354790 api_server.go:131] duration metric: took 27.958128ms to wait for apiserver health ...
I0111 08:19:45.548234 3354790 system_pods.go:43] waiting for kube-system pods to appear ...
I0111 08:19:45.555121 3354790 system_pods.go:59] 8 kube-system pods found
I0111 08:19:45.555169 3354790 system_pods.go:61] "coredns-7d764666f9-xpszs" [34169519-e874-4eda-bdac-d6247227597a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0111 08:19:45.555181 3354790 system_pods.go:61] "etcd-embed-certs-239792" [608377ab-2e76-46ee-b306-20d4b71e6efc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0111 08:19:45.555188 3354790 system_pods.go:61] "kindnet-f6k98" [68ea960a-d14e-4e30-823c-e4764d383a22] Running
I0111 08:19:45.555195 3354790 system_pods.go:61] "kube-apiserver-embed-certs-239792" [e09d8841-f87c-4d9e-8977-748dd580c23f] Running
I0111 08:19:45.555202 3354790 system_pods.go:61] "kube-controller-manager-embed-certs-239792" [9bc420e6-abcf-4ad7-a173-be1053698174] Running
I0111 08:19:45.555206 3354790 system_pods.go:61] "kube-proxy-8tlw4" [257d9528-3cd8-4f07-8184-767ac8431913] Running
I0111 08:19:45.555211 3354790 system_pods.go:61] "kube-scheduler-embed-certs-239792" [3e508900-d6e4-4bb4-bd7a-d43b5af01487] Running
I0111 08:19:45.555220 3354790 system_pods.go:61] "storage-provisioner" [ffeb23ad-8030-4bd1-9686-a4a1f9d4ba48] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0111 08:19:45.555227 3354790 system_pods.go:74] duration metric: took 6.986863ms to wait for pod list to return data ...
I0111 08:19:45.555239 3354790 default_sa.go:34] waiting for default service account to be created ...
I0111 08:19:45.573583 3354790 default_sa.go:45] found service account: "default"
I0111 08:19:45.573617 3354790 default_sa.go:55] duration metric: took 18.371036ms for default service account to be created ...
I0111 08:19:45.573631 3354790 system_pods.go:116] waiting for k8s-apps to be running ...
I0111 08:19:45.584573 3354790 system_pods.go:86] 8 kube-system pods found
I0111 08:19:45.584615 3354790 system_pods.go:89] "coredns-7d764666f9-xpszs" [34169519-e874-4eda-bdac-d6247227597a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0111 08:19:45.584625 3354790 system_pods.go:89] "etcd-embed-certs-239792" [608377ab-2e76-46ee-b306-20d4b71e6efc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0111 08:19:45.584632 3354790 system_pods.go:89] "kindnet-f6k98" [68ea960a-d14e-4e30-823c-e4764d383a22] Running
I0111 08:19:45.584641 3354790 system_pods.go:89] "kube-apiserver-embed-certs-239792" [e09d8841-f87c-4d9e-8977-748dd580c23f] Running
I0111 08:19:45.584655 3354790 system_pods.go:89] "kube-controller-manager-embed-certs-239792" [9bc420e6-abcf-4ad7-a173-be1053698174] Running
I0111 08:19:45.584669 3354790 system_pods.go:89] "kube-proxy-8tlw4" [257d9528-3cd8-4f07-8184-767ac8431913] Running
I0111 08:19:45.584682 3354790 system_pods.go:89] "kube-scheduler-embed-certs-239792" [3e508900-d6e4-4bb4-bd7a-d43b5af01487] Running
I0111 08:19:45.584688 3354790 system_pods.go:89] "storage-provisioner" [ffeb23ad-8030-4bd1-9686-a4a1f9d4ba48] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0111 08:19:45.584725 3354790 retry.go:84] will retry after 300ms: missing components: kube-dns
I0111 08:19:45.849880 3354790 system_pods.go:86] 8 kube-system pods found
I0111 08:19:45.849918 3354790 system_pods.go:89] "coredns-7d764666f9-xpszs" [34169519-e874-4eda-bdac-d6247227597a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0111 08:19:45.849927 3354790 system_pods.go:89] "etcd-embed-certs-239792" [608377ab-2e76-46ee-b306-20d4b71e6efc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0111 08:19:45.849942 3354790 system_pods.go:89] "kindnet-f6k98" [68ea960a-d14e-4e30-823c-e4764d383a22] Running
I0111 08:19:45.849949 3354790 system_pods.go:89] "kube-apiserver-embed-certs-239792" [e09d8841-f87c-4d9e-8977-748dd580c23f] Running
I0111 08:19:45.849954 3354790 system_pods.go:89] "kube-controller-manager-embed-certs-239792" [9bc420e6-abcf-4ad7-a173-be1053698174] Running
I0111 08:19:45.849967 3354790 system_pods.go:89] "kube-proxy-8tlw4" [257d9528-3cd8-4f07-8184-767ac8431913] Running
I0111 08:19:45.849978 3354790 system_pods.go:89] "kube-scheduler-embed-certs-239792" [3e508900-d6e4-4bb4-bd7a-d43b5af01487] Running
I0111 08:19:45.849985 3354790 system_pods.go:89] "storage-provisioner" [ffeb23ad-8030-4bd1-9686-a4a1f9d4ba48] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0111 08:19:46.154338 3354790 system_pods.go:86] 8 kube-system pods found
I0111 08:19:46.154380 3354790 system_pods.go:89] "coredns-7d764666f9-xpszs" [34169519-e874-4eda-bdac-d6247227597a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0111 08:19:46.154392 3354790 system_pods.go:89] "etcd-embed-certs-239792" [608377ab-2e76-46ee-b306-20d4b71e6efc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0111 08:19:46.154398 3354790 system_pods.go:89] "kindnet-f6k98" [68ea960a-d14e-4e30-823c-e4764d383a22] Running
I0111 08:19:46.154404 3354790 system_pods.go:89] "kube-apiserver-embed-certs-239792" [e09d8841-f87c-4d9e-8977-748dd580c23f] Running
I0111 08:19:46.154410 3354790 system_pods.go:89] "kube-controller-manager-embed-certs-239792" [9bc420e6-abcf-4ad7-a173-be1053698174] Running
I0111 08:19:46.154415 3354790 system_pods.go:89] "kube-proxy-8tlw4" [257d9528-3cd8-4f07-8184-767ac8431913] Running
I0111 08:19:46.154420 3354790 system_pods.go:89] "kube-scheduler-embed-certs-239792" [3e508900-d6e4-4bb4-bd7a-d43b5af01487] Running
I0111 08:19:46.154425 3354790 system_pods.go:89] "storage-provisioner" [ffeb23ad-8030-4bd1-9686-a4a1f9d4ba48] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0111 08:19:46.623537 3354790 system_pods.go:86] 8 kube-system pods found
I0111 08:19:46.623576 3354790 system_pods.go:89] "coredns-7d764666f9-xpszs" [34169519-e874-4eda-bdac-d6247227597a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0111 08:19:46.623585 3354790 system_pods.go:89] "etcd-embed-certs-239792" [608377ab-2e76-46ee-b306-20d4b71e6efc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0111 08:19:46.623592 3354790 system_pods.go:89] "kindnet-f6k98" [68ea960a-d14e-4e30-823c-e4764d383a22] Running
I0111 08:19:46.623597 3354790 system_pods.go:89] "kube-apiserver-embed-certs-239792" [e09d8841-f87c-4d9e-8977-748dd580c23f] Running
I0111 08:19:46.623603 3354790 system_pods.go:89] "kube-controller-manager-embed-certs-239792" [9bc420e6-abcf-4ad7-a173-be1053698174] Running
I0111 08:19:46.623653 3354790 system_pods.go:89] "kube-proxy-8tlw4" [257d9528-3cd8-4f07-8184-767ac8431913] Running
I0111 08:19:46.623659 3354790 system_pods.go:89] "kube-scheduler-embed-certs-239792" [3e508900-d6e4-4bb4-bd7a-d43b5af01487] Running
I0111 08:19:46.623665 3354790 system_pods.go:89] "storage-provisioner" [ffeb23ad-8030-4bd1-9686-a4a1f9d4ba48] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0111 08:19:47.175907 3354790 system_pods.go:86] 8 kube-system pods found
I0111 08:19:47.175954 3354790 system_pods.go:89] "coredns-7d764666f9-xpszs" [34169519-e874-4eda-bdac-d6247227597a] Running
I0111 08:19:47.175967 3354790 system_pods.go:89] "etcd-embed-certs-239792" [608377ab-2e76-46ee-b306-20d4b71e6efc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0111 08:19:47.175973 3354790 system_pods.go:89] "kindnet-f6k98" [68ea960a-d14e-4e30-823c-e4764d383a22] Running
I0111 08:19:47.175982 3354790 system_pods.go:89] "kube-apiserver-embed-certs-239792" [e09d8841-f87c-4d9e-8977-748dd580c23f] Running
I0111 08:19:47.175990 3354790 system_pods.go:89] "kube-controller-manager-embed-certs-239792" [9bc420e6-abcf-4ad7-a173-be1053698174] Running
I0111 08:19:47.175995 3354790 system_pods.go:89] "kube-proxy-8tlw4" [257d9528-3cd8-4f07-8184-767ac8431913] Running
I0111 08:19:47.176000 3354790 system_pods.go:89] "kube-scheduler-embed-certs-239792" [3e508900-d6e4-4bb4-bd7a-d43b5af01487] Running
I0111 08:19:47.176006 3354790 system_pods.go:89] "storage-provisioner" [ffeb23ad-8030-4bd1-9686-a4a1f9d4ba48] Running
I0111 08:19:47.176019 3354790 system_pods.go:126] duration metric: took 1.602374511s to wait for k8s-apps to be running ...
I0111 08:19:47.176030 3354790 system_svc.go:44] waiting for kubelet service to be running ....
I0111 08:19:47.176090 3354790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0111 08:19:47.189104 3354790 system_svc.go:56] duration metric: took 13.062731ms WaitForService to wait for kubelet
I0111 08:19:47.189173 3354790 kubeadm.go:587] duration metric: took 14.501396628s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0111 08:19:47.189201 3354790 node_conditions.go:102] verifying NodePressure condition ...
I0111 08:19:47.192452 3354790 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
I0111 08:19:47.192487 3354790 node_conditions.go:123] node cpu capacity is 2
I0111 08:19:47.192501 3354790 node_conditions.go:105] duration metric: took 3.293604ms to run NodePressure ...
I0111 08:19:47.192513 3354790 start.go:242] waiting for startup goroutines ...
I0111 08:19:47.192521 3354790 start.go:247] waiting for cluster config update ...
I0111 08:19:47.192532 3354790 start.go:256] writing updated cluster config ...
I0111 08:19:47.192844 3354790 ssh_runner.go:195] Run: rm -f paused
I0111 08:19:47.196266 3354790 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I0111 08:19:47.199715 3354790 pod_ready.go:83] waiting for pod "coredns-7d764666f9-xpszs" in "kube-system" namespace to be "Ready" or be gone ...
I0111 08:19:47.204562 3354790 pod_ready.go:94] pod "coredns-7d764666f9-xpszs" is "Ready"
I0111 08:19:47.204593 3354790 pod_ready.go:86] duration metric: took 4.838128ms for pod "coredns-7d764666f9-xpszs" in "kube-system" namespace to be "Ready" or be gone ...
I0111 08:19:47.206860 3354790 pod_ready.go:83] waiting for pod "etcd-embed-certs-239792" in "kube-system" namespace to be "Ready" or be gone ...
I0111 08:19:47.712565 3354790 pod_ready.go:94] pod "etcd-embed-certs-239792" is "Ready"
I0111 08:19:47.712594 3354790 pod_ready.go:86] duration metric: took 505.707766ms for pod "etcd-embed-certs-239792" in "kube-system" namespace to be "Ready" or be gone ...
I0111 08:19:47.715109 3354790 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-239792" in "kube-system" namespace to be "Ready" or be gone ...
I0111 08:19:47.719762 3354790 pod_ready.go:94] pod "kube-apiserver-embed-certs-239792" is "Ready"
I0111 08:19:47.719833 3354790 pod_ready.go:86] duration metric: took 4.698357ms for pod "kube-apiserver-embed-certs-239792" in "kube-system" namespace to be "Ready" or be gone ...
I0111 08:19:47.722413 3354790 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-239792" in "kube-system" namespace to be "Ready" or be gone ...
I0111 08:19:48.001867 3354790 pod_ready.go:94] pod "kube-controller-manager-embed-certs-239792" is "Ready"
I0111 08:19:48.001895 3354790 pod_ready.go:86] duration metric: took 279.417712ms for pod "kube-controller-manager-embed-certs-239792" in "kube-system" namespace to be "Ready" or be gone ...
I0111 08:19:48.201376 3354790 pod_ready.go:83] waiting for pod "kube-proxy-8tlw4" in "kube-system" namespace to be "Ready" or be gone ...
I0111 08:19:48.600582 3354790 pod_ready.go:94] pod "kube-proxy-8tlw4" is "Ready"
I0111 08:19:48.600610 3354790 pod_ready.go:86] duration metric: took 399.202398ms for pod "kube-proxy-8tlw4" in "kube-system" namespace to be "Ready" or be gone ...
I0111 08:19:48.800834 3354790 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-239792" in "kube-system" namespace to be "Ready" or be gone ...
I0111 08:19:49.200339 3354790 pod_ready.go:94] pod "kube-scheduler-embed-certs-239792" is "Ready"
I0111 08:19:49.200367 3354790 pod_ready.go:86] duration metric: took 399.502379ms for pod "kube-scheduler-embed-certs-239792" in "kube-system" namespace to be "Ready" or be gone ...
I0111 08:19:49.200379 3354790 pod_ready.go:40] duration metric: took 2.004053074s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I0111 08:19:49.257862 3354790 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
I0111 08:19:49.261264 3354790 out.go:203]
W0111 08:19:49.264163 3354790 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
I0111 08:19:49.267031 3354790 out.go:179] - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
I0111 08:19:49.270127 3354790 out.go:179] * Done! kubectl is now configured to use "embed-certs-239792" cluster and "default" namespace by default
I0111 08:20:06.987125 3329885 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001117732s
I0111 08:20:06.987156 3329885 kubeadm.go:319]
I0111 08:20:06.987538 3329885 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I0111 08:20:06.987650 3329885 kubeadm.go:319] - The kubelet is not running
I0111 08:20:06.987914 3329885 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0111 08:20:06.987923 3329885 kubeadm.go:319]
I0111 08:20:06.988383 3329885 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0111 08:20:06.988449 3329885 kubeadm.go:319] - 'systemctl status kubelet'
I0111 08:20:06.988619 3329885 kubeadm.go:319] - 'journalctl -xeu kubelet'
I0111 08:20:06.988627 3329885 kubeadm.go:319]
I0111 08:20:06.994037 3329885 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I0111 08:20:06.994459 3329885 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I0111 08:20:06.994571 3329885 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0111 08:20:06.994810 3329885 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I0111 08:20:06.994819 3329885 kubeadm.go:319]
I0111 08:20:06.994887 3329885 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
I0111 08:20:06.994944 3329885 kubeadm.go:403] duration metric: took 8m7.611643479s to StartCluster
I0111 08:20:06.994981 3329885 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0111 08:20:06.995043 3329885 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
I0111 08:20:07.022617 3329885 cri.go:96] found id: ""
I0111 08:20:07.022699 3329885 logs.go:282] 0 containers: []
W0111 08:20:07.022717 3329885 logs.go:284] No container was found matching "kube-apiserver"
I0111 08:20:07.022724 3329885 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0111 08:20:07.022804 3329885 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
I0111 08:20:07.050589 3329885 cri.go:96] found id: ""
I0111 08:20:07.050614 3329885 logs.go:282] 0 containers: []
W0111 08:20:07.050623 3329885 logs.go:284] No container was found matching "etcd"
I0111 08:20:07.050629 3329885 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0111 08:20:07.050713 3329885 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
I0111 08:20:07.076582 3329885 cri.go:96] found id: ""
I0111 08:20:07.076608 3329885 logs.go:282] 0 containers: []
W0111 08:20:07.076618 3329885 logs.go:284] No container was found matching "coredns"
I0111 08:20:07.076625 3329885 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0111 08:20:07.076719 3329885 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
I0111 08:20:07.103212 3329885 cri.go:96] found id: ""
I0111 08:20:07.103238 3329885 logs.go:282] 0 containers: []
W0111 08:20:07.103247 3329885 logs.go:284] No container was found matching "kube-scheduler"
I0111 08:20:07.103254 3329885 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0111 08:20:07.103318 3329885 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
I0111 08:20:07.129632 3329885 cri.go:96] found id: ""
I0111 08:20:07.129709 3329885 logs.go:282] 0 containers: []
W0111 08:20:07.129733 3329885 logs.go:284] No container was found matching "kube-proxy"
I0111 08:20:07.129744 3329885 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0111 08:20:07.129817 3329885 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
I0111 08:20:07.155361 3329885 cri.go:96] found id: ""
I0111 08:20:07.155388 3329885 logs.go:282] 0 containers: []
W0111 08:20:07.155397 3329885 logs.go:284] No container was found matching "kube-controller-manager"
I0111 08:20:07.155404 3329885 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0111 08:20:07.155466 3329885 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
I0111 08:20:07.180698 3329885 cri.go:96] found id: ""
I0111 08:20:07.180793 3329885 logs.go:282] 0 containers: []
W0111 08:20:07.180810 3329885 logs.go:284] No container was found matching "kindnet"
I0111 08:20:07.180822 3329885 logs.go:123] Gathering logs for kubelet ...
I0111 08:20:07.180834 3329885 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0111 08:20:07.237611 3329885 logs.go:123] Gathering logs for dmesg ...
I0111 08:20:07.237644 3329885 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0111 08:20:07.252588 3329885 logs.go:123] Gathering logs for describe nodes ...
I0111 08:20:07.252615 3329885 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0111 08:20:07.317178 3329885 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E0111 08:20:07.309153 4820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0111 08:20:07.309712 4820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0111 08:20:07.311194 4820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0111 08:20:07.311610 4820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0111 08:20:07.313064 4820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0111 08:20:07.317197 3329885 logs.go:123] Gathering logs for containerd ...
I0111 08:20:07.317211 3329885 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0111 08:20:07.357000 3329885 logs.go:123] Gathering logs for container status ...
I0111 08:20:07.357043 3329885 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W0111 08:20:07.386881 3329885 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001117732s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W0111 08:20:07.386931 3329885 out.go:285] *
W0111 08:20:07.386981 3329885 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
W0111 08:20:07.386998 3329885 out.go:285] *
W0111 08:20:07.387249 3329885 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0111 08:20:07.394169 3329885 out.go:203]
W0111 08:20:07.397066 3329885 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
W0111 08:20:07.397111 3329885 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0111 08:20:07.397136 3329885 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I0111 08:20:07.400215 3329885 out.go:203]
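Taken together, the exit and the suggestion above point at two candidate remediations, both taken straight from the output rather than verified fixes: retry with the kubelet cgroup driver forced to systemd (minikube's own suggestion), or explicitly allow cgroup v1 via the kubelet configuration option FailCgroupV1 (named in the kubeadm preflight warning). A hedged sketch of the first, reusing this run's binary, profile name, and flags:

  # Hypothetical retry following minikube's suggestion above; the flag names are
  # copied verbatim from the output, the profile name from this test run.
  out/minikube-linux-arm64 delete -p force-systemd-flag-610060
  out/minikube-linux-arm64 start -p force-systemd-flag-610060 \
      --memory=3072 --force-systemd --alsologtostderr -v=5 \
      --driver=docker --container-runtime=containerd \
      --extra-config=kubelet.cgroup-driver=systemd

Note that the kubelet log below suggests the failure is the cgroup v1 rejection rather than the driver setting, so the retry above may not be sufficient on its own.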
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
==> containerd <==
Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.106043698Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.106059312Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.106107130Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.106123294Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.106140902Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.106151281Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.106160692Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.106179998Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.106198976Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.106229408Z" level=info msg="Connect containerd service"
Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.106524622Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.107079858Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.127986290Z" level=info msg="Start subscribing containerd event"
Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.128066936Z" level=info msg="Start recovering state"
Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.128760385Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.128973590Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.164185276Z" level=info msg="Start event monitor"
Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.164408270Z" level=info msg="Start cni network conf syncer for default"
Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.164484469Z" level=info msg="Start streaming server"
Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.164546376Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.164601825Z" level=info msg="runtime interface starting up..."
Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.164657019Z" level=info msg="starting plugins..."
Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.164718121Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jan 11 08:11:57 force-systemd-flag-610060 systemd[1]: Started containerd.service - containerd container runtime.
Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.166571568Z" level=info msg="containerd successfully booted in 0.081008s"
==> describe nodes <==
command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E0111 08:20:08.752548 4950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0111 08:20:08.753113 4950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0111 08:20:08.754736 4950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0111 08:20:08.755263 4950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0111 08:20:08.757023 4950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
==> dmesg <==
[Jan11 07:19] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.000008] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[Jan11 07:25] kauditd_printk_skb: 8 callbacks suppressed
==> kernel <==
08:20:08 up 14:02, 0 user, load average: 1.81, 1.90, 2.03
Linux force-systemd-flag-610060 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
==> kubelet <==
Jan 11 08:20:05 force-systemd-flag-610060 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 11 08:20:06 force-systemd-flag-610060 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 317.
Jan 11 08:20:06 force-systemd-flag-610060 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 11 08:20:06 force-systemd-flag-610060 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 11 08:20:06 force-systemd-flag-610060 kubelet[4745]: E0111 08:20:06.092362 4745 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Jan 11 08:20:06 force-systemd-flag-610060 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 11 08:20:06 force-systemd-flag-610060 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 11 08:20:06 force-systemd-flag-610060 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
Jan 11 08:20:06 force-systemd-flag-610060 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 11 08:20:06 force-systemd-flag-610060 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 11 08:20:06 force-systemd-flag-610060 kubelet[4750]: E0111 08:20:06.839632 4750 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Jan 11 08:20:06 force-systemd-flag-610060 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 11 08:20:06 force-systemd-flag-610060 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 11 08:20:07 force-systemd-flag-610060 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
Jan 11 08:20:07 force-systemd-flag-610060 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 11 08:20:07 force-systemd-flag-610060 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 11 08:20:07 force-systemd-flag-610060 kubelet[4838]: E0111 08:20:07.612139 4838 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Jan 11 08:20:07 force-systemd-flag-610060 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 11 08:20:07 force-systemd-flag-610060 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 11 08:20:08 force-systemd-flag-610060 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
Jan 11 08:20:08 force-systemd-flag-610060 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 11 08:20:08 force-systemd-flag-610060 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 11 08:20:08 force-systemd-flag-610060 kubelet[4866]: E0111 08:20:08.354429 4866 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Jan 11 08:20:08 force-systemd-flag-610060 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 11 08:20:08 force-systemd-flag-610060 systemd[1]: kubelet.service: Failed with result 'exit-code'.
-- /stdout --
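Every kubelet restart in the log above fails validation with the same cause: the host is on cgroups v1, which kubelet v1.35 rejects by default. A hedged way to confirm which cgroup version the node actually mounts (plain coreutils, independent of minikube):

  # Prints the filesystem type of the unified cgroup mount point:
  # "cgroup2fs" indicates cgroups v2, "tmpfs" the legacy v1 hierarchy.
  stat -fc %T /sys/fs/cgroup/

Per the kubeadm preflight warning earlier, running kubelet v1.35+ on a v1 host additionally requires setting the kubelet configuration option 'FailCgroupV1' to 'false' and explicitly skipping the validation.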
helpers_test.go:263: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-610060 -n force-systemd-flag-610060
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-610060 -n force-systemd-flag-610060: exit status 6 (334.83814ms)
-- stdout --
Stopped
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E0111 08:20:09.200499 3359135 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-610060" does not appear in /home/jenkins/minikube-integration/22402-3122619/kubeconfig
** /stderr **
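The stale-context warning in the status output names its own fix; as a sketch (using minikube's standard -p profile selector, and only meaningful while the profile still exists, i.e. before the delete below):

  # As the warning suggests; repoints the kubectl context at the current profile.
  out/minikube-linux-arm64 update-context -p force-systemd-flag-610060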
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-flag-610060" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-flag-610060" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-arm64 delete -p force-systemd-flag-610060
E0111 08:20:10.007247 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/old-k8s-version-334404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-610060: (1.969947542s)
--- FAIL: TestForceSystemdFlag (505.26s)