=== RUN TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT TestForceSystemdFlag
docker_test.go:91: (dbg) Run: out/minikube-linux-arm64 start -p force-systemd-flag-649810 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=docker
E1228 07:06:45.222391 4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/functional-723745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:08:30.709583 4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/addons-201219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:09:15.082531 4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/skaffold-556203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:09:15.087956 4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/skaffold-556203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:09:15.098317 4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/skaffold-556203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:09:15.118679 4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/skaffold-556203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:09:15.158992 4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/skaffold-556203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:09:15.239525 4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/skaffold-556203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:09:15.400033 4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/skaffold-556203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:09:15.720678 4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/skaffold-556203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:09:16.361538 4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/skaffold-556203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:09:17.641827 4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/skaffold-556203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:09:20.202067 4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/skaffold-556203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:09:25.322413 4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/skaffold-556203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:09:35.563204 4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/skaffold-556203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:09:56.043490 4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/skaffold-556203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:10:27.660473 4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/addons-201219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:10:37.004361 4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/skaffold-556203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:11:45.223721 4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/functional-723745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:11:58.924666 4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/skaffold-556203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:14:15.082663 4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/skaffold-556203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-flag-649810 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=docker: exit status 109 (8m22.801297488s)
-- stdout --
* [force-systemd-flag-649810] minikube v1.37.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=22352
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/22352-2382/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2382/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "force-systemd-flag-649810" primary control-plane node in "force-systemd-flag-649810" cluster
* Pulling base image v0.0.48-1766884053-22351 ...
-- /stdout --
** stderr **
I1228 07:05:59.908381 226337 out.go:360] Setting OutFile to fd 1 ...
I1228 07:05:59.908505 226337 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 07:05:59.908511 226337 out.go:374] Setting ErrFile to fd 2...
I1228 07:05:59.908515 226337 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 07:05:59.908870 226337 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2382/.minikube/bin
I1228 07:05:59.909316 226337 out.go:368] Setting JSON to false
I1228 07:05:59.911797 226337 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":2909,"bootTime":1766902651,"procs":167,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
I1228 07:05:59.911876 226337 start.go:143] virtualization:
I1228 07:05:59.916290 226337 out.go:179] * [force-systemd-flag-649810] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I1228 07:05:59.920371 226337 out.go:179] - MINIKUBE_LOCATION=22352
I1228 07:05:59.920602 226337 notify.go:221] Checking for updates...
I1228 07:05:59.930233 226337 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1228 07:05:59.933365 226337 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22352-2382/kubeconfig
I1228 07:05:59.936750 226337 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2382/.minikube
I1228 07:05:59.939782 226337 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I1228 07:05:59.943014 226337 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1228 07:05:59.946329 226337 config.go:182] Loaded profile config "force-systemd-env-475689": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1228 07:05:59.946460 226337 driver.go:422] Setting default libvirt URI to qemu:///system
I1228 07:05:59.989184 226337 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
I1228 07:05:59.989298 226337 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1228 07:06:00.147313 226337 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2025-12-28 07:06:00.132170795 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1228 07:06:00.147445 226337 docker.go:319] overlay module found
I1228 07:06:00.151222 226337 out.go:179] * Using the docker driver based on user configuration
I1228 07:06:00.154382 226337 start.go:309] selected driver: docker
I1228 07:06:00.154405 226337 start.go:928] validating driver "docker" against <nil>
I1228 07:06:00.154420 226337 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1228 07:06:00.155281 226337 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1228 07:06:00.370917 226337 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-12-28 07:06:00.355782962 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1228 07:06:00.371072 226337 start_flags.go:333] no existing cluster config was found, will generate one from the flags
I1228 07:06:00.371298 226337 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
I1228 07:06:00.374992 226337 out.go:179] * Using Docker driver with root privileges
I1228 07:06:00.377986 226337 cni.go:84] Creating CNI manager for ""
I1228 07:06:00.378068 226337 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1228 07:06:00.378082 226337 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I1228 07:06:00.378160 226337 start.go:353] cluster config:
{Name:force-systemd-flag-649810 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-649810 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1228 07:06:00.381388 226337 out.go:179] * Starting "force-systemd-flag-649810" primary control-plane node in "force-systemd-flag-649810" cluster
I1228 07:06:00.384286 226337 cache.go:134] Beginning downloading kic base image for docker with docker
I1228 07:06:00.387356 226337 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
I1228 07:06:00.390388 226337 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I1228 07:06:00.390453 226337 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22352-2382/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4
I1228 07:06:00.390466 226337 cache.go:65] Caching tarball of preloaded images
I1228 07:06:00.390569 226337 preload.go:251] Found /home/jenkins/minikube-integration/22352-2382/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I1228 07:06:00.390579 226337 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
I1228 07:06:00.390710 226337 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/config.json ...
I1228 07:06:00.390748 226337 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
I1228 07:06:00.390743 226337 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/config.json: {Name:mkcc4924bc7430bc738783d3bc1ceb8a9cf9dbf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1228 07:06:00.418489 226337 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
I1228 07:06:00.418520 226337 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
I1228 07:06:00.418536 226337 cache.go:243] Successfully downloaded all kic artifacts
I1228 07:06:00.418570 226337 start.go:360] acquireMachinesLock for force-systemd-flag-649810: {Name:mka57d38f56a82b4b8389b88f726a058fa795922 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1228 07:06:00.418691 226337 start.go:364] duration metric: took 104.256µs to acquireMachinesLock for "force-systemd-flag-649810"
I1228 07:06:00.418719 226337 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-649810 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-649810 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I1228 07:06:00.418813 226337 start.go:125] createHost starting for "" (driver="docker")
I1228 07:06:00.426510 226337 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
I1228 07:06:00.426912 226337 start.go:159] libmachine.API.Create for "force-systemd-flag-649810" (driver="docker")
I1228 07:06:00.426990 226337 client.go:173] LocalClient.Create starting
I1228 07:06:00.427147 226337 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca.pem
I1228 07:06:00.427225 226337 main.go:144] libmachine: Decoding PEM data...
I1228 07:06:00.427273 226337 main.go:144] libmachine: Parsing certificate...
I1228 07:06:00.427370 226337 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22352-2382/.minikube/certs/cert.pem
I1228 07:06:00.427427 226337 main.go:144] libmachine: Decoding PEM data...
I1228 07:06:00.427455 226337 main.go:144] libmachine: Parsing certificate...
I1228 07:06:00.428427 226337 cli_runner.go:164] Run: docker network inspect force-systemd-flag-649810 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1228 07:06:00.447890 226337 cli_runner.go:211] docker network inspect force-systemd-flag-649810 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1228 07:06:00.447979 226337 network_create.go:284] running [docker network inspect force-systemd-flag-649810] to gather additional debugging logs...
I1228 07:06:00.448135 226337 cli_runner.go:164] Run: docker network inspect force-systemd-flag-649810
W1228 07:06:00.469959 226337 cli_runner.go:211] docker network inspect force-systemd-flag-649810 returned with exit code 1
I1228 07:06:00.469990 226337 network_create.go:287] error running [docker network inspect force-systemd-flag-649810]: docker network inspect force-systemd-flag-649810: exit status 1
stdout:
[]
stderr:
Error response from daemon: network force-systemd-flag-649810 not found
I1228 07:06:00.470003 226337 network_create.go:289] output of [docker network inspect force-systemd-flag-649810]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network force-systemd-flag-649810 not found
** /stderr **
I1228 07:06:00.470126 226337 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1228 07:06:00.492500 226337 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e663f46973f0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:b6:e5:53:aa:f4:ad} reservation:<nil>}
I1228 07:06:00.492943 226337 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ad53498571c9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:7e:ea:8c:9a:c6:5d} reservation:<nil>}
I1228 07:06:00.493252 226337 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b73d9f306bb6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:7e:31:bd:ea:20} reservation:<nil>}
I1228 07:06:00.493666 226337 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400197fcd0}
I1228 07:06:00.493683 226337 network_create.go:124] attempt to create docker network force-systemd-flag-649810 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
I1228 07:06:00.493748 226337 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-649810 force-systemd-flag-649810
I1228 07:06:00.576344 226337 network_create.go:108] docker network force-systemd-flag-649810 192.168.76.0/24 created
I1228 07:06:00.576376 226337 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-flag-649810" container
I1228 07:06:00.576446 226337 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1228 07:06:00.593986 226337 cli_runner.go:164] Run: docker volume create force-systemd-flag-649810 --label name.minikube.sigs.k8s.io=force-systemd-flag-649810 --label created_by.minikube.sigs.k8s.io=true
I1228 07:06:00.616436 226337 oci.go:103] Successfully created a docker volume force-systemd-flag-649810
I1228 07:06:00.616534 226337 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-649810-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-649810 --entrypoint /usr/bin/test -v force-systemd-flag-649810:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -d /var/lib
I1228 07:06:01.192754 226337 oci.go:107] Successfully prepared a docker volume force-systemd-flag-649810
I1228 07:06:01.192824 226337 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I1228 07:06:01.192841 226337 kic.go:194] Starting extracting preloaded images to volume ...
I1228 07:06:01.192909 226337 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22352-2382/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-649810:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -I lz4 -xf /preloaded.tar -C /extractDir
I1228 07:06:04.534678 226337 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22352-2382/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-649810:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.341730036s)
I1228 07:06:04.534706 226337 kic.go:203] duration metric: took 3.341862567s to extract preloaded images to volume ...
W1228 07:06:04.534846 226337 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1228 07:06:04.534950 226337 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1228 07:06:04.616424 226337 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-649810 --name force-systemd-flag-649810 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-649810 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-649810 --network force-systemd-flag-649810 --ip 192.168.76.2 --volume force-systemd-flag-649810:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1
I1228 07:06:05.050915 226337 cli_runner.go:164] Run: docker container inspect force-systemd-flag-649810 --format={{.State.Running}}
I1228 07:06:05.083775 226337 cli_runner.go:164] Run: docker container inspect force-systemd-flag-649810 --format={{.State.Status}}
I1228 07:06:05.121760 226337 cli_runner.go:164] Run: docker exec force-systemd-flag-649810 stat /var/lib/dpkg/alternatives/iptables
I1228 07:06:05.195792 226337 oci.go:144] the created container "force-systemd-flag-649810" has a running status.
I1228 07:06:05.195838 226337 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22352-2382/.minikube/machines/force-systemd-flag-649810/id_rsa...
I1228 07:06:05.653918 226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/machines/force-systemd-flag-649810/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I1228 07:06:05.653967 226337 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22352-2382/.minikube/machines/force-systemd-flag-649810/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1228 07:06:05.681329 226337 cli_runner.go:164] Run: docker container inspect force-systemd-flag-649810 --format={{.State.Status}}
I1228 07:06:05.716468 226337 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1228 07:06:05.716494 226337 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-649810 chown docker:docker /home/docker/.ssh/authorized_keys]
I1228 07:06:05.794934 226337 cli_runner.go:164] Run: docker container inspect force-systemd-flag-649810 --format={{.State.Status}}
I1228 07:06:05.820643 226337 machine.go:94] provisionDockerMachine start ...
I1228 07:06:05.820722 226337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-649810
I1228 07:06:05.844435 226337 main.go:144] libmachine: Using SSH client type: native
I1228 07:06:05.845616 226337 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 32999 <nil> <nil>}
I1228 07:06:05.845648 226337 main.go:144] libmachine: About to run SSH command:
hostname
I1228 07:06:05.846196 226337 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38420->127.0.0.1:32999: read: connection reset by peer
I1228 07:06:08.999828 226337 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-649810
I1228 07:06:08.999858 226337 ubuntu.go:182] provisioning hostname "force-systemd-flag-649810"
I1228 07:06:08.999919 226337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-649810
I1228 07:06:09.037969 226337 main.go:144] libmachine: Using SSH client type: native
I1228 07:06:09.038372 226337 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 32999 <nil> <nil>}
I1228 07:06:09.038392 226337 main.go:144] libmachine: About to run SSH command:
sudo hostname force-systemd-flag-649810 && echo "force-systemd-flag-649810" | sudo tee /etc/hostname
I1228 07:06:09.203095 226337 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-649810
I1228 07:06:09.203197 226337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-649810
I1228 07:06:09.226561 226337 main.go:144] libmachine: Using SSH client type: native
I1228 07:06:09.226886 226337 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 32999 <nil> <nil>}
I1228 07:06:09.226912 226337 main.go:144] libmachine: About to run SSH command:
if ! grep -xq '.*\sforce-systemd-flag-649810' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-649810/g' /etc/hosts;
else
echo '127.0.1.1 force-systemd-flag-649810' | sudo tee -a /etc/hosts;
fi
fi
I1228 07:06:09.376692 226337 main.go:144] libmachine: SSH cmd err, output: <nil>:
I1228 07:06:09.376727 226337 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22352-2382/.minikube CaCertPath:/home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22352-2382/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22352-2382/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22352-2382/.minikube}
I1228 07:06:09.376753 226337 ubuntu.go:190] setting up certificates
I1228 07:06:09.376763 226337 provision.go:84] configureAuth start
I1228 07:06:09.376841 226337 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-649810
I1228 07:06:09.402227 226337 provision.go:143] copyHostCerts
I1228 07:06:09.402278 226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22352-2382/.minikube/ca.pem
I1228 07:06:09.402318 226337 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2382/.minikube/ca.pem, removing ...
I1228 07:06:09.402325 226337 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2382/.minikube/ca.pem
I1228 07:06:09.402409 226337 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22352-2382/.minikube/ca.pem (1082 bytes)
I1228 07:06:09.402515 226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22352-2382/.minikube/cert.pem
I1228 07:06:09.402540 226337 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2382/.minikube/cert.pem, removing ...
I1228 07:06:09.402545 226337 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2382/.minikube/cert.pem
I1228 07:06:09.402581 226337 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22352-2382/.minikube/cert.pem (1123 bytes)
I1228 07:06:09.402643 226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22352-2382/.minikube/key.pem
I1228 07:06:09.402664 226337 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2382/.minikube/key.pem, removing ...
I1228 07:06:09.402677 226337 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2382/.minikube/key.pem
I1228 07:06:09.402711 226337 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22352-2382/.minikube/key.pem (1675 bytes)
I1228 07:06:09.402788 226337 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22352-2382/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-649810 san=[127.0.0.1 192.168.76.2 force-systemd-flag-649810 localhost minikube]
I1228 07:06:09.752728 226337 provision.go:177] copyRemoteCerts
I1228 07:06:09.752940 226337 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1228 07:06:09.753068 226337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-649810
I1228 07:06:09.785834 226337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32999 SSHKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/machines/force-systemd-flag-649810/id_rsa Username:docker}
I1228 07:06:09.921106 226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I1228 07:06:09.921170 226337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1228 07:06:09.942067 226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/machines/server.pem -> /etc/docker/server.pem
I1228 07:06:09.942130 226337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
I1228 07:06:09.962397 226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I1228 07:06:09.962470 226337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1228 07:06:09.983245 226337 provision.go:87] duration metric: took 606.461413ms to configureAuth
I1228 07:06:09.983284 226337 ubuntu.go:206] setting minikube options for container-runtime
I1228 07:06:09.983486 226337 config.go:182] Loaded profile config "force-systemd-flag-649810": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1228 07:06:09.983556 226337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-649810
I1228 07:06:10.018234 226337 main.go:144] libmachine: Using SSH client type: native
I1228 07:06:10.018571 226337 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 32999 <nil> <nil>}
I1228 07:06:10.018580 226337 main.go:144] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1228 07:06:10.171532 226337 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
I1228 07:06:10.171556 226337 ubuntu.go:71] root file system type: overlay
I1228 07:06:10.171677 226337 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I1228 07:06:10.171764 226337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-649810
I1228 07:06:10.196947 226337 main.go:144] libmachine: Using SSH client type: native
I1228 07:06:10.197266 226337 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 32999 <nil> <nil>}
I1228 07:06:10.197352 226337 main.go:144] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
Wants=network-online.target containerd.service
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=always
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
-H fd:// --containerd=/run/containerd/containerd.sock \
-H unix:///var/run/docker.sock \
--default-ulimit=nofile=1048576:1048576 \
--tlsverify \
--tlscacert /etc/docker/ca.pem \
--tlscert /etc/docker/server.pem \
--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1228 07:06:10.359758 226337 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
Wants=network-online.target containerd.service
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=always
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H fd:// --containerd=/run/containerd/containerd.sock -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
I1228 07:06:10.359847 226337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-649810
I1228 07:06:10.387300 226337 main.go:144] libmachine: Using SSH client type: native
I1228 07:06:10.387769 226337 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 32999 <nil> <nil>}
I1228 07:06:10.387790 226337 main.go:144] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1228 07:06:11.609721 226337 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2025-12-12 14:49:02.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2025-12-28 07:06:10.353229981 +0000
@@ -9,23 +9,34 @@
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
Restart=always
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H fd:// --containerd=/run/containerd/containerd.sock -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
+
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
+LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I1228 07:06:11.609760 226337 machine.go:97] duration metric: took 5.78909378s to provisionDockerMachine
I1228 07:06:11.609782 226337 client.go:176] duration metric: took 11.182753053s to LocalClient.Create
I1228 07:06:11.609802 226337 start.go:167] duration metric: took 11.182887652s to libmachine.API.Create "force-systemd-flag-649810"
I1228 07:06:11.609811 226337 start.go:293] postStartSetup for "force-systemd-flag-649810" (driver="docker")
I1228 07:06:11.609821 226337 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1228 07:06:11.609893 226337 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1228 07:06:11.609934 226337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-649810
I1228 07:06:11.637109 226337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32999 SSHKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/machines/force-systemd-flag-649810/id_rsa Username:docker}
I1228 07:06:11.737067 226337 ssh_runner.go:195] Run: cat /etc/os-release
I1228 07:06:11.740612 226337 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1228 07:06:11.740643 226337 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1228 07:06:11.740655 226337 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-2382/.minikube/addons for local assets ...
I1228 07:06:11.740714 226337 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-2382/.minikube/files for local assets ...
I1228 07:06:11.740797 226337 filesync.go:149] local asset: /home/jenkins/minikube-integration/22352-2382/.minikube/files/etc/ssl/certs/42022.pem -> 42022.pem in /etc/ssl/certs
I1228 07:06:11.740808 226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/files/etc/ssl/certs/42022.pem -> /etc/ssl/certs/42022.pem
I1228 07:06:11.740908 226337 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1228 07:06:11.750293 226337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/files/etc/ssl/certs/42022.pem --> /etc/ssl/certs/42022.pem (1708 bytes)
I1228 07:06:11.773152 226337 start.go:296] duration metric: took 163.328024ms for postStartSetup
I1228 07:06:11.773520 226337 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-649810
I1228 07:06:11.810119 226337 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/config.json ...
I1228 07:06:11.810475 226337 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1228 07:06:11.810541 226337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-649810
I1228 07:06:11.832046 226337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32999 SSHKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/machines/force-systemd-flag-649810/id_rsa Username:docker}
I1228 07:06:11.938437 226337 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1228 07:06:11.944396 226337 start.go:128] duration metric: took 11.525566626s to createHost
I1228 07:06:11.944421 226337 start.go:83] releasing machines lock for "force-systemd-flag-649810", held for 11.52572031s
I1228 07:06:11.944491 226337 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-649810
I1228 07:06:11.969414 226337 ssh_runner.go:195] Run: cat /version.json
I1228 07:06:11.969478 226337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-649810
I1228 07:06:11.969777 226337 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1228 07:06:11.969830 226337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-649810
I1228 07:06:12.000799 226337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32999 SSHKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/machines/force-systemd-flag-649810/id_rsa Username:docker}
I1228 07:06:12.017933 226337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32999 SSHKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/machines/force-systemd-flag-649810/id_rsa Username:docker}
I1228 07:06:12.132244 226337 ssh_runner.go:195] Run: systemctl --version
I1228 07:06:12.231712 226337 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1228 07:06:12.236758 226337 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1228 07:06:12.236869 226337 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1228 07:06:12.267687 226337 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
I1228 07:06:12.267755 226337 start.go:496] detecting cgroup driver to use...
I1228 07:06:12.267782 226337 start.go:500] using "systemd" cgroup driver as enforced via flags
I1228 07:06:12.267953 226337 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1228 07:06:12.283188 226337 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1228 07:06:12.293095 226337 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1228 07:06:12.304428 226337 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
I1228 07:06:12.304533 226337 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
I1228 07:06:12.313854 226337 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1228 07:06:12.323205 226337 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1228 07:06:12.332643 226337 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1228 07:06:12.341934 226337 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1228 07:06:12.350791 226337 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1228 07:06:12.360609 226337 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1228 07:06:12.369833 226337 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1228 07:06:12.379802 226337 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1228 07:06:12.388095 226337 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1228 07:06:12.396058 226337 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1228 07:06:12.536042 226337 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1228 07:06:12.674161 226337 start.go:496] detecting cgroup driver to use...
I1228 07:06:12.674237 226337 start.go:500] using "systemd" cgroup driver as enforced via flags
I1228 07:06:12.674325 226337 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I1228 07:06:12.699050 226337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1228 07:06:12.712858 226337 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1228 07:06:12.751092 226337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1228 07:06:12.769844 226337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1228 07:06:12.792531 226337 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I1228 07:06:12.815446 226337 ssh_runner.go:195] Run: which cri-dockerd
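Note that /etc/crictl.yaml has now been written twice: first pointing at containerd, then, once the docker runtime is selected, at cri-dockerd. A quick way to confirm which endpoint crictl ends up using (expected contents follow directly from the tee above):
  cat /etc/crictl.yaml
  # runtime-endpoint: unix:///var/run/cri-dockerd.sock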
I1228 07:06:12.819518 226337 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I1228 07:06:12.829311 226337 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
I1228 07:06:12.845032 226337 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I1228 07:06:12.989013 226337 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I1228 07:06:13.140533 226337 docker.go:578] configuring docker to use "systemd" as cgroup driver...
I1228 07:06:13.140637 226337 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
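The daemon.json pushed here (129 bytes) is not printed in the log. A minimal sketch of a systemd-cgroup-driver daemon.json, assuming the standard exec-opts mechanism, would look like the following; the file minikube actually writes likely carries a few more keys (log and storage driver settings):
  cat /etc/docker/daemon.json
  # {
  #   "exec-opts": ["native.cgroupdriver=systemd"]
  # }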
I1228 07:06:13.157163 226337 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
I1228 07:06:13.171809 226337 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1228 07:06:13.314373 226337 ssh_runner.go:195] Run: sudo systemctl restart docker
I1228 07:06:13.798806 226337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1228 07:06:13.813917 226337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I1228 07:06:13.829978 226337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I1228 07:06:13.845472 226337 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I1228 07:06:13.990757 226337 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I1228 07:06:14.139338 226337 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1228 07:06:14.291076 226337 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I1228 07:06:14.316287 226337 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
I1228 07:06:14.331768 226337 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1228 07:06:14.475844 226337 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I1228 07:06:14.562973 226337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I1228 07:06:14.582946 226337 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
I1228 07:06:14.583063 226337 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I1228 07:06:14.587245 226337 start.go:574] Will wait 60s for crictl version
I1228 07:06:14.587308 226337 ssh_runner.go:195] Run: which crictl
I1228 07:06:14.591035 226337 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1228 07:06:14.619000 226337 start.go:590] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 29.1.3
RuntimeApiVersion: v1
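The same check can be reproduced by hand against the cri-dockerd socket; the flag below is standard crictl usage rather than something this log shows:
  sudo /usr/local/bin/crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version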
I1228 07:06:14.619117 226337 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1228 07:06:14.654802 226337 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1228 07:06:14.681742 226337 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 29.1.3 ...
I1228 07:06:14.681898 226337 cli_runner.go:164] Run: docker network inspect force-systemd-flag-649810 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1228 07:06:14.699922 226337 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I1228 07:06:14.704031 226337 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
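The hosts update above filters out any stale host.minikube.internal entry, appends the current mapping, and only then copies the temp file into place with sudo (the redirection itself runs as the unprivileged SSH user, so the copy is the one step that needs root). The same idiom, spelled out:
  { grep -v $'\thost.minikube.internal$' /etc/hosts; echo "192.168.76.1 host.minikube.internal"; } > /tmp/hosts.new
  sudo cp /tmp/hosts.new /etc/hosts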
I1228 07:06:14.718757 226337 kubeadm.go:884] updating cluster {Name:force-systemd-flag-649810 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-649810 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
I1228 07:06:14.718869 226337 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I1228 07:06:14.718923 226337 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1228 07:06:14.738067 226337 docker.go:694] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
registry.k8s.io/pause:3.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I1228 07:06:14.738094 226337 docker.go:624] Images already preloaded, skipping extraction
I1228 07:06:14.738159 226337 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1228 07:06:14.765792 226337 docker.go:694] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
registry.k8s.io/pause:3.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I1228 07:06:14.765815 226337 cache_images.go:86] Images are preloaded, skipping loading
I1228 07:06:14.765825 226337 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 docker true true} ...
I1228 07:06:14.765924 226337 kubeadm.go:947] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-649810 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
[Install]
config:
{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-649810 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
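The unit snippet above is what gets written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. Once it is in place, the rendered unit can be checked on the node; the command is standard systemd tooling rather than something this log runs:
  sudo systemctl cat kubelet
  # look for the second ExecStart carrying --hostname-override=force-systemd-flag-649810 and --node-ip=192.168.76.2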
I1228 07:06:14.766001 226337 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I1228 07:06:14.828478 226337 cni.go:84] Creating CNI manager for ""
I1228 07:06:14.828557 226337 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1228 07:06:14.828591 226337 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1228 07:06:14.828637 226337 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-649810 NodeName:force-systemd-flag-649810 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1228 07:06:14.828791 226337 kubeadm.go:203] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.76.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "force-systemd-flag-649810"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.76.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.35.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
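For this --force-systemd run the setting that matters most in the generated config is cgroupDriver: systemd in the KubeletConfiguration, which has to agree with the driver dockerd was configured with earlier. A quick agreement check on the node (both commands are either shown elsewhere in this log or standard tooling; the kubelet config file exists once kubeadm has written it):
  docker info --format '{{.CgroupDriver}}'               # expect: systemd
  sudo grep cgroupDriver /var/lib/kubelet/config.yaml    # expect: cgroupDriver: systemd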
I1228 07:06:14.828879 226337 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
I1228 07:06:14.837993 226337 binaries.go:51] Found k8s binaries, skipping transfer
I1228 07:06:14.838058 226337 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1228 07:06:14.846918 226337 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
I1228 07:06:14.862481 226337 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1228 07:06:14.877609 226337 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
I1228 07:06:14.893026 226337 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I1228 07:06:14.897112 226337 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1228 07:06:14.908147 226337 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1228 07:06:15.112938 226337 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1228 07:06:15.149385 226337 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810 for IP: 192.168.76.2
I1228 07:06:15.149408 226337 certs.go:195] generating shared ca certs ...
I1228 07:06:15.149425 226337 certs.go:227] acquiring lock for ca certs: {Name:mkb08779780dcf6b96f2c93a4ec9c28968a3dff7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1228 07:06:15.149572 226337 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-2382/.minikube/ca.key
I1228 07:06:15.149628 226337 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-2382/.minikube/proxy-client-ca.key
I1228 07:06:15.149636 226337 certs.go:257] generating profile certs ...
I1228 07:06:15.149691 226337 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/client.key
I1228 07:06:15.149702 226337 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/client.crt with IP's: []
I1228 07:06:15.327648 226337 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/client.crt ...
I1228 07:06:15.327721 226337 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/client.crt: {Name:mkf75acb8f7153fe0d0255b564acb6149af2fb43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1228 07:06:15.327938 226337 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/client.key ...
I1228 07:06:15.327982 226337 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/client.key: {Name:mk51f561ed38ca116434114e1f62874070b9255b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1228 07:06:15.328119 226337 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/apiserver.key.aa9e84f1
I1228 07:06:15.328164 226337 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/apiserver.crt.aa9e84f1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
I1228 07:06:15.764980 226337 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/apiserver.crt.aa9e84f1 ...
I1228 07:06:15.765013 226337 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/apiserver.crt.aa9e84f1: {Name:mk91fced1432c5d7a2938e5f8f1f25ea86d8f5c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1228 07:06:15.765212 226337 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/apiserver.key.aa9e84f1 ...
I1228 07:06:15.765227 226337 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/apiserver.key.aa9e84f1: {Name:mk1a153a4b0a803bdf2ccf3b1ffb3b75a611c21d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1228 07:06:15.765314 226337 certs.go:382] copying /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/apiserver.crt.aa9e84f1 -> /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/apiserver.crt
I1228 07:06:15.765393 226337 certs.go:386] copying /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/apiserver.key.aa9e84f1 -> /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/apiserver.key
I1228 07:06:15.765455 226337 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/proxy-client.key
I1228 07:06:15.765467 226337 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/proxy-client.crt with IP's: []
I1228 07:06:16.054118 226337 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/proxy-client.crt ...
I1228 07:06:16.054154 226337 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/proxy-client.crt: {Name:mk48c2c2ab804522bc505c3ba557fdae87d36100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1228 07:06:16.054331 226337 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/proxy-client.key ...
I1228 07:06:16.054347 226337 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/proxy-client.key: {Name:mk69a54a58808c1b19f454fc1eed5065bebd15fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1228 07:06:16.054418 226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I1228 07:06:16.054445 226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I1228 07:06:16.054466 226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I1228 07:06:16.054482 226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I1228 07:06:16.054500 226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I1228 07:06:16.054517 226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I1228 07:06:16.054529 226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I1228 07:06:16.054543 226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I1228 07:06:16.054593 226337 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/4202.pem (1338 bytes)
W1228 07:06:16.054636 226337 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-2382/.minikube/certs/4202_empty.pem, impossibly tiny 0 bytes
I1228 07:06:16.054649 226337 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca-key.pem (1679 bytes)
I1228 07:06:16.054677 226337 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca.pem (1082 bytes)
I1228 07:06:16.054705 226337 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/cert.pem (1123 bytes)
I1228 07:06:16.054746 226337 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/key.pem (1675 bytes)
I1228 07:06:16.054797 226337 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2382/.minikube/files/etc/ssl/certs/42022.pem (1708 bytes)
I1228 07:06:16.054833 226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I1228 07:06:16.054850 226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/4202.pem -> /usr/share/ca-certificates/4202.pem
I1228 07:06:16.054861 226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/files/etc/ssl/certs/42022.pem -> /usr/share/ca-certificates/42022.pem
I1228 07:06:16.055446 226337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1228 07:06:16.078627 226337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1228 07:06:16.098321 226337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1228 07:06:16.116670 226337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1228 07:06:16.134710 226337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
I1228 07:06:16.152509 226337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1228 07:06:16.170879 226337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1228 07:06:16.188865 226337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1228 07:06:16.206838 226337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1228 07:06:16.226258 226337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/certs/4202.pem --> /usr/share/ca-certificates/4202.pem (1338 bytes)
I1228 07:06:16.245780 226337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/files/etc/ssl/certs/42022.pem --> /usr/share/ca-certificates/42022.pem (1708 bytes)
I1228 07:06:16.268659 226337 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
I1228 07:06:16.283601 226337 ssh_runner.go:195] Run: openssl version
I1228 07:06:16.290585 226337 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1228 07:06:16.299195 226337 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1228 07:06:16.307118 226337 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1228 07:06:16.310841 226337 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:28 /usr/share/ca-certificates/minikubeCA.pem
I1228 07:06:16.310916 226337 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1228 07:06:16.352064 226337 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1228 07:06:16.359487 226337 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
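The steps just above are the standard OpenSSL subject-hash install for a CA: link the cert into /etc/ssl/certs, compute its hash, and create a <hash>.0 symlink so TLS clients can find it. In generic form (the hash works out to b5213941 in this run):
  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"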
I1228 07:06:16.366859 226337 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4202.pem
I1228 07:06:16.374261 226337 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4202.pem /etc/ssl/certs/4202.pem
I1228 07:06:16.381698 226337 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4202.pem
I1228 07:06:16.385388 226337 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:34 /usr/share/ca-certificates/4202.pem
I1228 07:06:16.385461 226337 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4202.pem
I1228 07:06:16.426915 226337 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I1228 07:06:16.434366 226337 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4202.pem /etc/ssl/certs/51391683.0
I1228 07:06:16.441642 226337 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42022.pem
I1228 07:06:16.449184 226337 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42022.pem /etc/ssl/certs/42022.pem
I1228 07:06:16.456957 226337 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42022.pem
I1228 07:06:16.460669 226337 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:34 /usr/share/ca-certificates/42022.pem
I1228 07:06:16.460736 226337 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42022.pem
I1228 07:06:16.501782 226337 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I1228 07:06:16.509722 226337 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/42022.pem /etc/ssl/certs/3ec20f2e.0
I1228 07:06:16.517199 226337 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1228 07:06:16.520925 226337 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1228 07:06:16.520999 226337 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-649810 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-649810 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1228 07:06:16.521142 226337 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1228 07:06:16.537329 226337 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1228 07:06:16.545115 226337 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1228 07:06:16.552764 226337 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1228 07:06:16.552877 226337 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1228 07:06:16.560792 226337 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1228 07:06:16.560864 226337 kubeadm.go:158] found existing configuration files:
I1228 07:06:16.560941 226337 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1228 07:06:16.568352 226337 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1228 07:06:16.568441 226337 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1228 07:06:16.575993 226337 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1228 07:06:16.583437 226337 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1228 07:06:16.583546 226337 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1228 07:06:16.590681 226337 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1228 07:06:16.598093 226337 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1228 07:06:16.598202 226337 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1228 07:06:16.605574 226337 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1228 07:06:16.613280 226337 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1228 07:06:16.613396 226337 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1228 07:06:16.620636 226337 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1228 07:06:16.661468 226337 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
I1228 07:06:16.661712 226337 kubeadm.go:319] [preflight] Running pre-flight checks
I1228 07:06:16.758679 226337 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1228 07:06:16.758779 226337 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1228 07:06:16.758849 226337 kubeadm.go:319] OS: Linux
I1228 07:06:16.758929 226337 kubeadm.go:319] CGROUPS_CPU: enabled
I1228 07:06:16.759009 226337 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1228 07:06:16.759089 226337 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1228 07:06:16.759163 226337 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1228 07:06:16.759245 226337 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1228 07:06:16.759325 226337 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1228 07:06:16.759389 226337 kubeadm.go:319] CGROUPS_PIDS: enabled
I1228 07:06:16.759482 226337 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1228 07:06:16.759553 226337 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1228 07:06:16.835436 226337 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1228 07:06:16.835601 226337 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1228 07:06:16.835720 226337 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1228 07:06:16.852604 226337 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1228 07:06:16.858977 226337 out.go:252] - Generating certificates and keys ...
I1228 07:06:16.859071 226337 kubeadm.go:319] [certs] Using existing ca certificate authority
I1228 07:06:16.859148 226337 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1228 07:06:16.922161 226337 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1228 07:06:17.011768 226337 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1228 07:06:17.090969 226337 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1228 07:06:17.253680 226337 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1228 07:06:17.439963 226337 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1228 07:06:17.440300 226337 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-649810 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
I1228 07:06:17.731890 226337 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1228 07:06:17.732248 226337 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-649810 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
I1228 07:06:18.395961 226337 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1228 07:06:18.651951 226337 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1228 07:06:18.929995 226337 kubeadm.go:319] [certs] Generating "sa" key and public key
I1228 07:06:18.930273 226337 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1228 07:06:19.098124 226337 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1228 07:06:19.475849 226337 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1228 07:06:19.685709 226337 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1228 07:06:20.030601 226337 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1228 07:06:20.108979 226337 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1228 07:06:20.109747 226337 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1228 07:06:20.112581 226337 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1228 07:06:20.115925 226337 out.go:252] - Booting up control plane ...
I1228 07:06:20.116029 226337 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1228 07:06:20.116107 226337 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1228 07:06:20.116173 226337 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1228 07:06:20.131690 226337 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1228 07:06:20.131807 226337 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1228 07:06:20.142201 226337 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1228 07:06:20.142570 226337 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1228 07:06:20.142817 226337 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1228 07:06:20.280684 226337 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1228 07:06:20.280818 226337 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1228 07:10:20.275741 226337 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001182866s
I1228 07:10:20.275772 226337 kubeadm.go:319]
I1228 07:10:20.275828 226337 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1228 07:10:20.275861 226337 kubeadm.go:319] - The kubelet is not running
I1228 07:10:20.275960 226337 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1228 07:10:20.275966 226337 kubeadm.go:319]
I1228 07:10:20.276064 226337 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1228 07:10:20.276095 226337 kubeadm.go:319] - 'systemctl status kubelet'
I1228 07:10:20.276124 226337 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1228 07:10:20.276128 226337 kubeadm.go:319]
I1228 07:10:20.279666 226337 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1228 07:10:20.280132 226337 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1228 07:10:20.280283 226337 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1228 07:10:20.280566 226337 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1228 07:10:20.280582 226337 kubeadm.go:319]
I1228 07:10:20.280656 226337 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
W1228 07:10:20.280798 226337 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-649810 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-649810 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001182866s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-649810 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-649810 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001182866s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
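The failure above (the kubelet never serves /healthz within 4m0s) is normally diagnosed from inside the node. A short troubleshooting pass using the commands kubeadm itself suggests, plus the cgroup-driver cross-check; the docker exec entry point assumes the kic container is named after the profile, as the inspect calls earlier in this log indicate:
  docker exec -it force-systemd-flag-649810 /bin/bash    # from the host; or: minikube ssh -p force-systemd-flag-649810
  systemctl status kubelet                               # inside the node
  journalctl -xeu kubelet --no-pager | tail -n 100
  curl -sS http://127.0.0.1:10248/healthz
  docker info --format '{{.CgroupDriver}}'               # must match the kubelet's cgroupDriver (systemd here)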
I1228 07:10:20.280887 226337 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
I1228 07:10:20.708651 226337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1228 07:10:20.722039 226337 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1228 07:10:20.722109 226337 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1228 07:10:20.730359 226337 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1228 07:10:20.730423 226337 kubeadm.go:158] found existing configuration files:
I1228 07:10:20.730491 226337 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1228 07:10:20.738525 226337 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1228 07:10:20.738593 226337 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1228 07:10:20.746327 226337 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1228 07:10:20.754111 226337 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1228 07:10:20.754179 226337 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1228 07:10:20.761709 226337 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1228 07:10:20.769442 226337 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1228 07:10:20.769505 226337 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1228 07:10:20.777179 226337 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1228 07:10:20.785378 226337 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1228 07:10:20.785469 226337 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1228 07:10:20.793339 226337 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1228 07:10:20.906011 226337 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1228 07:10:20.906414 226337 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1228 07:10:20.974641 226337 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1228 07:14:22.104356 226337 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1228 07:14:22.104394 226337 kubeadm.go:319]
I1228 07:14:22.104466 226337 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
I1228 07:14:22.105084 226337 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
I1228 07:14:22.105135 226337 kubeadm.go:319] [preflight] Running pre-flight checks
I1228 07:14:22.105225 226337 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1228 07:14:22.105279 226337 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1228 07:14:22.105313 226337 kubeadm.go:319] OS: Linux
I1228 07:14:22.105359 226337 kubeadm.go:319] CGROUPS_CPU: enabled
I1228 07:14:22.105408 226337 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1228 07:14:22.105455 226337 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1228 07:14:22.105503 226337 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1228 07:14:22.105551 226337 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1228 07:14:22.105600 226337 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1228 07:14:22.105645 226337 kubeadm.go:319] CGROUPS_PIDS: enabled
I1228 07:14:22.105693 226337 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1228 07:14:22.105739 226337 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1228 07:14:22.105812 226337 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1228 07:14:22.105907 226337 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1228 07:14:22.105996 226337 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1228 07:14:22.106058 226337 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1228 07:14:22.109591 226337 out.go:252] - Generating certificates and keys ...
I1228 07:14:22.109685 226337 kubeadm.go:319] [certs] Using existing ca certificate authority
I1228 07:14:22.109750 226337 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1228 07:14:22.109825 226337 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1228 07:14:22.109891 226337 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
I1228 07:14:22.109960 226337 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
I1228 07:14:22.110013 226337 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
I1228 07:14:22.110076 226337 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
I1228 07:14:22.110138 226337 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
I1228 07:14:22.110212 226337 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1228 07:14:22.110285 226337 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1228 07:14:22.110323 226337 kubeadm.go:319] [certs] Using the existing "sa" key
I1228 07:14:22.110393 226337 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1228 07:14:22.110444 226337 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1228 07:14:22.110501 226337 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1228 07:14:22.110554 226337 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1228 07:14:22.110617 226337 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1228 07:14:22.110671 226337 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1228 07:14:22.110755 226337 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1228 07:14:22.110820 226337 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1228 07:14:22.113628 226337 out.go:252] - Booting up control plane ...
I1228 07:14:22.113810 226337 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1228 07:14:22.113959 226337 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1228 07:14:22.114042 226337 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1228 07:14:22.114156 226337 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1228 07:14:22.114258 226337 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1228 07:14:22.114370 226337 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1228 07:14:22.114461 226337 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1228 07:14:22.114503 226337 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1228 07:14:22.114643 226337 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1228 07:14:22.114755 226337 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1228 07:14:22.114825 226337 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000416054s
I1228 07:14:22.114829 226337 kubeadm.go:319]
I1228 07:14:22.114889 226337 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1228 07:14:22.114944 226337 kubeadm.go:319] - The kubelet is not running
I1228 07:14:22.115058 226337 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1228 07:14:22.115063 226337 kubeadm.go:319]
I1228 07:14:22.115176 226337 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1228 07:14:22.115211 226337 kubeadm.go:319] - 'systemctl status kubelet'
I1228 07:14:22.115243 226337 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1228 07:14:22.115303 226337 kubeadm.go:403] duration metric: took 8m5.594328388s to StartCluster
I1228 07:14:22.115380 226337 ssh_runner.go:195] Run: sudo runc list -f json
I1228 07:14:22.115458 226337 kubeadm.go:319]
E1228 07:14:22.129689 226337 logs.go:279] Failed to list containers for "kube-apiserver": runc: sudo runc list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:14:22Z" level=error msg="open /run/runc: no such file or directory"
I1228 07:14:22.129812 226337 ssh_runner.go:195] Run: sudo runc list -f json
E1228 07:14:22.144714 226337 logs.go:279] Failed to list containers for "etcd": runc: sudo runc list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:14:22Z" level=error msg="open /run/runc: no such file or directory"
I1228 07:14:22.144779 226337 ssh_runner.go:195] Run: sudo runc list -f json
E1228 07:14:22.157963 226337 logs.go:279] Failed to list containers for "coredns": runc: sudo runc list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:14:22Z" level=error msg="open /run/runc: no such file or directory"
I1228 07:14:22.158032 226337 ssh_runner.go:195] Run: sudo runc list -f json
E1228 07:14:22.178330 226337 logs.go:279] Failed to list containers for "kube-scheduler": runc: sudo runc list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:14:22Z" level=error msg="open /run/runc: no such file or directory"
I1228 07:14:22.178399 226337 ssh_runner.go:195] Run: sudo runc list -f json
E1228 07:14:22.207266 226337 logs.go:279] Failed to list containers for "kube-proxy": runc: sudo runc list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:14:22Z" level=error msg="open /run/runc: no such file or directory"
I1228 07:14:22.207333 226337 ssh_runner.go:195] Run: sudo runc list -f json
E1228 07:14:22.238212 226337 logs.go:279] Failed to list containers for "kube-controller-manager": runc: sudo runc list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:14:22Z" level=error msg="open /run/runc: no such file or directory"
I1228 07:14:22.238280 226337 ssh_runner.go:195] Run: sudo runc list -f json
E1228 07:14:22.254139 226337 logs.go:279] Failed to list containers for "kindnet": runc: sudo runc list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:14:22Z" level=error msg="open /run/runc: no such file or directory"
I1228 07:14:22.254169 226337 logs.go:123] Gathering logs for kubelet ...
I1228 07:14:22.254181 226337 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1228 07:14:22.337714 226337 logs.go:123] Gathering logs for dmesg ...
I1228 07:14:22.337754 226337 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1228 07:14:22.358014 226337 logs.go:123] Gathering logs for describe nodes ...
I1228 07:14:22.358048 226337 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1228 07:14:22.449954 226337 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1228 07:14:22.438828 5500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:14:22.440075 5500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:14:22.441129 5500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:14:22.443104 5500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:14:22.443902 5500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
E1228 07:14:22.438828 5500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:14:22.440075 5500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:14:22.441129 5500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:14:22.443104 5500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:14:22.443902 5500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I1228 07:14:22.449975 226337 logs.go:123] Gathering logs for Docker ...
I1228 07:14:22.449988 226337 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I1228 07:14:22.475409 226337 logs.go:123] Gathering logs for container status ...
I1228 07:14:22.475444 226337 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W1228 07:14:22.540684 226337 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000416054s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1228 07:14:22.540729 226337 out.go:285] *
W1228 07:14:22.540781 226337 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000416054s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000416054s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1228 07:14:22.540794 226337 out.go:285] *
W1228 07:14:22.541043 226337 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1228 07:14:22.548492 226337 out.go:203]
W1228 07:14:22.550664 226337 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000416054s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000416054s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1228 07:14:22.550723 226337 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W1228 07:14:22.550747 226337 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I1228 07:14:22.553893 226337 out.go:203]
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-flag-649810 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=docker" : exit status 109
docker_test.go:110: (dbg) Run: out/minikube-linux-arm64 -p force-systemd-flag-649810 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2025-12-28 07:14:23.139509994 +0000 UTC m=+2785.116132004
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======> post-mortem[TestForceSystemdFlag]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======> post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:240: (dbg) Run: docker inspect force-systemd-flag-649810
helpers_test.go:244: (dbg) docker inspect force-systemd-flag-649810:
-- stdout --
[
{
"Id": "22a55950ba9644dfe9b802912dd83735736caf4f67e2e9a507a099c39a77904b",
"Created": "2025-12-28T07:06:04.639169024Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 227274,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-12-28T07:06:04.727903456Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:02c8841c3fcd0bf74c67c65bf527029b123b3355e8bc89181a31113e9282ee3c",
"ResolvConfPath": "/var/lib/docker/containers/22a55950ba9644dfe9b802912dd83735736caf4f67e2e9a507a099c39a77904b/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/22a55950ba9644dfe9b802912dd83735736caf4f67e2e9a507a099c39a77904b/hostname",
"HostsPath": "/var/lib/docker/containers/22a55950ba9644dfe9b802912dd83735736caf4f67e2e9a507a099c39a77904b/hosts",
"LogPath": "/var/lib/docker/containers/22a55950ba9644dfe9b802912dd83735736caf4f67e2e9a507a099c39a77904b/22a55950ba9644dfe9b802912dd83735736caf4f67e2e9a507a099c39a77904b-json.log",
"Name": "/force-systemd-flag-649810",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"force-systemd-flag-649810:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "force-systemd-flag-649810",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 3221225472,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 6442450944,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "22a55950ba9644dfe9b802912dd83735736caf4f67e2e9a507a099c39a77904b",
"LowerDir": "/var/lib/docker/overlay2/e82f0c4c464dee65a8dd92e066e4480cb81062e10d0f194386014328f948ca31-init/diff:/var/lib/docker/overlay2/ecb99d95c7e8ff1804547a73cc82a9ed1888766e4e833c4a7b53fdf298df8f33/diff",
"MergedDir": "/var/lib/docker/overlay2/e82f0c4c464dee65a8dd92e066e4480cb81062e10d0f194386014328f948ca31/merged",
"UpperDir": "/var/lib/docker/overlay2/e82f0c4c464dee65a8dd92e066e4480cb81062e10d0f194386014328f948ca31/diff",
"WorkDir": "/var/lib/docker/overlay2/e82f0c4c464dee65a8dd92e066e4480cb81062e10d0f194386014328f948ca31/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "volume",
"Name": "force-systemd-flag-649810",
"Source": "/var/lib/docker/volumes/force-systemd-flag-649810/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
},
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
}
],
"Config": {
"Hostname": "force-systemd-flag-649810",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "force-systemd-flag-649810",
"name.minikube.sigs.k8s.io": "force-systemd-flag-649810",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "c82fcd188bdccad83897d9dad70a1e8b0384eaa2e7e48e61d87f2b1735f3825e",
"SandboxKey": "/var/run/docker/netns/c82fcd188bdc",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32999"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33000"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33003"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33001"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33002"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"force-systemd-flag-649810": {
"IPAMConfig": {
"IPv4Address": "192.168.76.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "26:93:92:89:1d:e6",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "5690f8910d0a1179b8b5010fa140b6129798301441766f5c0c359c3fe684e086",
"EndpointID": "10f2940db2d73e005211731a2d4ce5981fafdb9236e6a598dbb03954bbef38ac",
"Gateway": "192.168.76.1",
"IPAddress": "192.168.76.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"force-systemd-flag-649810",
"22a55950ba96"
]
}
}
}
}
]
-- /stdout --
helpers_test.go:248: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-649810 -n force-systemd-flag-649810
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-649810 -n force-systemd-flag-649810: exit status 6 (378.341303ms)
-- stdout --
Running
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1228 07:14:23.532512 239159 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-649810" does not appear in /home/jenkins/minikube-integration/22352-2382/kubeconfig
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestForceSystemdFlag FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======> post-mortem[TestForceSystemdFlag]: minikube logs <======
helpers_test.go:256: (dbg) Run: out/minikube-linux-arm64 -p force-systemd-flag-649810 logs -n 25
helpers_test.go:261: TestForceSystemdFlag logs:
-- stdout --
==> Audit <==
┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬───────────
──────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼───────────
──────────┤
│ ssh │ -p cilium-436830 sudo systemctl cat docker --no-pager │ cilium-436830 │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ │
│ ssh │ -p cilium-436830 sudo cat /etc/docker/daemon.json │ cilium-436830 │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ │
│ ssh │ -p cilium-436830 sudo docker system info │ cilium-436830 │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ │
│ ssh │ -p cilium-436830 sudo systemctl status cri-docker --all --full --no-pager │ cilium-436830 │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ │
│ ssh │ -p cilium-436830 sudo systemctl cat cri-docker --no-pager │ cilium-436830 │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ │
│ ssh │ -p cilium-436830 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ cilium-436830 │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ │
│ ssh │ -p cilium-436830 sudo cat /usr/lib/systemd/system/cri-docker.service │ cilium-436830 │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ │
│ ssh │ -p cilium-436830 sudo cri-dockerd --version │ cilium-436830 │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ │
│ ssh │ -p cilium-436830 sudo systemctl status containerd --all --full --no-pager │ cilium-436830 │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ │
│ ssh │ -p cilium-436830 sudo systemctl cat containerd --no-pager │ cilium-436830 │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ │
│ ssh │ -p cilium-436830 sudo cat /lib/systemd/system/containerd.service │ cilium-436830 │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ │
│ ssh │ -p cilium-436830 sudo cat /etc/containerd/config.toml │ cilium-436830 │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ │
│ ssh │ -p cilium-436830 sudo containerd config dump │ cilium-436830 │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ │
│ ssh │ -p cilium-436830 sudo systemctl status crio --all --full --no-pager │ cilium-436830 │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ │
│ ssh │ -p cilium-436830 sudo systemctl cat crio --no-pager │ cilium-436830 │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ │
│ ssh │ -p cilium-436830 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \; │ cilium-436830 │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ │
│ ssh │ -p cilium-436830 sudo crio config │ cilium-436830 │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ │
│ delete │ -p cilium-436830 │ cilium-436830 │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ 28 Dec 25 07:05 UTC │
│ start │ -p force-systemd-env-475689 --memory=3072 --alsologtostderr -v=5 --driver=docker --container-runtime=docker │ force-systemd-env-475689 │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ │
│ delete │ -p offline-docker-575789 │ offline-docker-575789 │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ 28 Dec 25 07:05 UTC │
│ start │ -p force-systemd-flag-649810 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=docker │ force-systemd-flag-649810 │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ │
│ ssh │ force-systemd-env-475689 ssh docker info --format {{.CgroupDriver}} │ force-systemd-env-475689 │ jenkins │ v1.37.0 │ 28 Dec 25 07:14 UTC │ 28 Dec 25 07:14 UTC │
│ delete │ -p force-systemd-env-475689 │ force-systemd-env-475689 │ jenkins │ v1.37.0 │ 28 Dec 25 07:14 UTC │ 28 Dec 25 07:14 UTC │
│ start │ -p docker-flags-974112 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker --container-runtime=docker │ docker-flags-974112 │ jenkins │ v1.37.0 │ 28 Dec 25 07:14 UTC │ │
│ ssh │ force-systemd-flag-649810 ssh docker info --format {{.CgroupDriver}} │ force-systemd-flag-649810 │ jenkins │ v1.37.0 │ 28 Dec 25 07:14 UTC │ 28 Dec 25 07:14 UTC │
└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴───────────
──────────┘
==> Last Start <==
Log file created at: 2025/12/28 07:14:21
Running on machine: ip-172-31-31-251
Binary: Built with gc go1.25.5 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1228 07:14:21.473844 238674 out.go:360] Setting OutFile to fd 1 ...
I1228 07:14:21.474037 238674 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 07:14:21.474052 238674 out.go:374] Setting ErrFile to fd 2...
I1228 07:14:21.474058 238674 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 07:14:21.474431 238674 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2382/.minikube/bin
I1228 07:14:21.474966 238674 out.go:368] Setting JSON to false
I1228 07:14:21.475861 238674 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":3411,"bootTime":1766902651,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
I1228 07:14:21.476004 238674 start.go:143] virtualization:
I1228 07:14:21.479677 238674 out.go:179] * [docker-flags-974112] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I1228 07:14:21.484111 238674 out.go:179] - MINIKUBE_LOCATION=22352
I1228 07:14:21.484173 238674 notify.go:221] Checking for updates...
I1228 07:14:21.490671 238674 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1228 07:14:21.493884 238674 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22352-2382/kubeconfig
I1228 07:14:21.497954 238674 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2382/.minikube
I1228 07:14:21.501067 238674 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I1228 07:14:21.504148 238674 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1228 07:14:21.507748 238674 config.go:182] Loaded profile config "force-systemd-flag-649810": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1228 07:14:21.507881 238674 driver.go:422] Setting default libvirt URI to qemu:///system
I1228 07:14:21.536812 238674 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
I1228 07:14:21.536933 238674 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1228 07:14:21.616406 238674 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-28 07:14:21.60654715 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1228 07:14:21.616509 238674 docker.go:319] overlay module found
I1228 07:14:21.619807 238674 out.go:179] * Using the docker driver based on user configuration
I1228 07:14:21.622822 238674 start.go:309] selected driver: docker
I1228 07:14:21.622842 238674 start.go:928] validating driver "docker" against <nil>
I1228 07:14:21.622867 238674 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1228 07:14:21.623672 238674 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1228 07:14:21.676457 238674 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-28 07:14:21.667211937 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1228 07:14:21.676611 238674 start_flags.go:333] no existing cluster config was found, will generate one from the flags
I1228 07:14:21.676826 238674 start_flags.go:1014] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
I1228 07:14:21.679761 238674 out.go:179] * Using Docker driver with root privileges
I1228 07:14:21.682680 238674 cni.go:84] Creating CNI manager for ""
I1228 07:14:21.682756 238674 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1228 07:14:21.682769 238674 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I1228 07:14:21.682848 238674 start.go:353] cluster config:
{Name:docker-flags-974112 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:docker-flags-974112 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1228 07:14:21.686022 238674 out.go:179] * Starting "docker-flags-974112" primary control-plane node in "docker-flags-974112" cluster
I1228 07:14:21.688888 238674 cache.go:134] Beginning downloading kic base image for docker with docker
I1228 07:14:21.691851 238674 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
I1228 07:14:21.694834 238674 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I1228 07:14:21.694889 238674 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22352-2382/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4
I1228 07:14:21.694904 238674 cache.go:65] Caching tarball of preloaded images
I1228 07:14:21.694921 238674 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
I1228 07:14:21.695005 238674 preload.go:251] Found /home/jenkins/minikube-integration/22352-2382/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I1228 07:14:21.695015 238674 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
I1228 07:14:21.695129 238674 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/docker-flags-974112/config.json ...
I1228 07:14:21.695145 238674 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/docker-flags-974112/config.json: {Name:mk6cba84f3d902f4079b5b5328111f916ed3e3de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1228 07:14:21.713913 238674 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
I1228 07:14:21.713936 238674 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
I1228 07:14:21.713957 238674 cache.go:243] Successfully downloaded all kic artifacts
I1228 07:14:21.713989 238674 start.go:360] acquireMachinesLock for docker-flags-974112: {Name:mkb59a147fc69d050468884d4c5766ddc83a8325 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1228 07:14:21.714109 238674 start.go:364] duration metric: took 99.464µs to acquireMachinesLock for "docker-flags-974112"
I1228 07:14:21.714136 238674 start.go:93] Provisioning new machine with config: &{Name:docker-flags-974112 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:docker-flags-974112 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I1228 07:14:21.714208 238674 start.go:125] createHost starting for "" (driver="docker")
I1228 07:14:22.104356 226337 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1228 07:14:22.104394 226337 kubeadm.go:319]
I1228 07:14:22.104466 226337 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
I1228 07:14:22.105084 226337 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
I1228 07:14:22.105135 226337 kubeadm.go:319] [preflight] Running pre-flight checks
I1228 07:14:22.105225 226337 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1228 07:14:22.105279 226337 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1228 07:14:22.105313 226337 kubeadm.go:319] OS: Linux
I1228 07:14:22.105359 226337 kubeadm.go:319] CGROUPS_CPU: enabled
I1228 07:14:22.105408 226337 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1228 07:14:22.105455 226337 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1228 07:14:22.105503 226337 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1228 07:14:22.105551 226337 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1228 07:14:22.105600 226337 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1228 07:14:22.105645 226337 kubeadm.go:319] CGROUPS_PIDS: enabled
I1228 07:14:22.105693 226337 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1228 07:14:22.105739 226337 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1228 07:14:22.105812 226337 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1228 07:14:22.105907 226337 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1228 07:14:22.105996 226337 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1228 07:14:22.106058 226337 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1228 07:14:22.109591 226337 out.go:252] - Generating certificates and keys ...
I1228 07:14:22.109685 226337 kubeadm.go:319] [certs] Using existing ca certificate authority
I1228 07:14:22.109750 226337 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1228 07:14:22.109825 226337 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1228 07:14:22.109891 226337 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
I1228 07:14:22.109960 226337 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
I1228 07:14:22.110013 226337 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
I1228 07:14:22.110076 226337 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
I1228 07:14:22.110138 226337 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
I1228 07:14:22.110212 226337 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1228 07:14:22.110285 226337 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1228 07:14:22.110323 226337 kubeadm.go:319] [certs] Using the existing "sa" key
I1228 07:14:22.110393 226337 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1228 07:14:22.110444 226337 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1228 07:14:22.110501 226337 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1228 07:14:22.110554 226337 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1228 07:14:22.110617 226337 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1228 07:14:22.110671 226337 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1228 07:14:22.110755 226337 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1228 07:14:22.110820 226337 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1228 07:14:22.113628 226337 out.go:252] - Booting up control plane ...
I1228 07:14:22.113810 226337 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1228 07:14:22.113959 226337 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1228 07:14:22.114042 226337 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1228 07:14:22.114156 226337 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1228 07:14:22.114258 226337 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1228 07:14:22.114370 226337 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1228 07:14:22.114461 226337 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1228 07:14:22.114503 226337 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1228 07:14:22.114643 226337 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1228 07:14:22.114755 226337 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1228 07:14:22.114825 226337 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000416054s
I1228 07:14:22.114829 226337 kubeadm.go:319]
I1228 07:14:22.114889 226337 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1228 07:14:22.114944 226337 kubeadm.go:319] - The kubelet is not running
I1228 07:14:22.115058 226337 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1228 07:14:22.115063 226337 kubeadm.go:319]
I1228 07:14:22.115176 226337 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1228 07:14:22.115211 226337 kubeadm.go:319] - 'systemctl status kubelet'
I1228 07:14:22.115243 226337 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1228 07:14:22.115303 226337 kubeadm.go:403] duration metric: took 8m5.594328388s to StartCluster
I1228 07:14:22.115380 226337 ssh_runner.go:195] Run: sudo runc list -f json
I1228 07:14:22.115458 226337 kubeadm.go:319]
E1228 07:14:22.129689 226337 logs.go:279] Failed to list containers for "kube-apiserver": runc: sudo runc list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:14:22Z" level=error msg="open /run/runc: no such file or directory"
I1228 07:14:22.129812 226337 ssh_runner.go:195] Run: sudo runc list -f json
E1228 07:14:22.144714 226337 logs.go:279] Failed to list containers for "etcd": runc: sudo runc list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:14:22Z" level=error msg="open /run/runc: no such file or directory"
I1228 07:14:22.144779 226337 ssh_runner.go:195] Run: sudo runc list -f json
E1228 07:14:22.157963 226337 logs.go:279] Failed to list containers for "coredns": runc: sudo runc list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:14:22Z" level=error msg="open /run/runc: no such file or directory"
I1228 07:14:22.158032 226337 ssh_runner.go:195] Run: sudo runc list -f json
E1228 07:14:22.178330 226337 logs.go:279] Failed to list containers for "kube-scheduler": runc: sudo runc list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:14:22Z" level=error msg="open /run/runc: no such file or directory"
I1228 07:14:22.178399 226337 ssh_runner.go:195] Run: sudo runc list -f json
E1228 07:14:22.207266 226337 logs.go:279] Failed to list containers for "kube-proxy": runc: sudo runc list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:14:22Z" level=error msg="open /run/runc: no such file or directory"
I1228 07:14:22.207333 226337 ssh_runner.go:195] Run: sudo runc list -f json
E1228 07:14:22.238212 226337 logs.go:279] Failed to list containers for "kube-controller-manager": runc: sudo runc list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:14:22Z" level=error msg="open /run/runc: no such file or directory"
I1228 07:14:22.238280 226337 ssh_runner.go:195] Run: sudo runc list -f json
E1228 07:14:22.254139 226337 logs.go:279] Failed to list containers for "kindnet": runc: sudo runc list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:14:22Z" level=error msg="open /run/runc: no such file or directory"
I1228 07:14:22.254169 226337 logs.go:123] Gathering logs for kubelet ...
I1228 07:14:22.254181 226337 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1228 07:14:22.337714 226337 logs.go:123] Gathering logs for dmesg ...
I1228 07:14:22.337754 226337 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1228 07:14:22.358014 226337 logs.go:123] Gathering logs for describe nodes ...
I1228 07:14:22.358048 226337 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1228 07:14:22.449954 226337 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1228 07:14:22.438828 5500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:14:22.440075 5500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:14:22.441129 5500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:14:22.443104 5500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:14:22.443902 5500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
E1228 07:14:22.438828 5500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:14:22.440075 5500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:14:22.441129 5500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:14:22.443104 5500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:14:22.443902 5500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I1228 07:14:22.449975 226337 logs.go:123] Gathering logs for Docker ...
I1228 07:14:22.449988 226337 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I1228 07:14:22.475409 226337 logs.go:123] Gathering logs for container status ...
I1228 07:14:22.475444 226337 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W1228 07:14:22.540684 226337 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000416054s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1228 07:14:22.540729 226337 out.go:285] *
W1228 07:14:22.540781 226337 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000416054s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1228 07:14:22.540794 226337 out.go:285] *
W1228 07:14:22.541043 226337 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1228 07:14:22.548492 226337 out.go:203]
W1228 07:14:22.550664 226337 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000416054s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1228 07:14:22.550723 226337 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W1228 07:14:22.550747 226337 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I1228 07:14:22.553893 226337 out.go:203]
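One way to act on the suggestion above, sketched as shell commands using only the binary path and profile name that appear in this log; the --extra-config value is the one minikube itself prints, the remaining start flags are assumed to match the original invocation, and this retry does not address the underlying cgroup v1 condition the kubelet validation error complains about:

  out/minikube-linux-arm64 delete -p force-systemd-flag-649810
  out/minikube-linux-arm64 start -p force-systemd-flag-649810 --driver=docker --container-runtime=docker --force-systemd --extra-config=kubelet.cgroup-driver=systemd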
==> Docker <==
Dec 28 07:06:13 force-systemd-flag-649810 dockerd[1146]: time="2025-12-28T07:06:13.528721133Z" level=info msg="Restoring containers: start."
Dec 28 07:06:13 force-systemd-flag-649810 dockerd[1146]: time="2025-12-28T07:06:13.556618574Z" level=info msg="Deleting nftables IPv4 rules" error="exit status 1"
Dec 28 07:06:13 force-systemd-flag-649810 dockerd[1146]: time="2025-12-28T07:06:13.580654942Z" level=info msg="Deleting nftables IPv6 rules" error="exit status 1"
Dec 28 07:06:13 force-systemd-flag-649810 dockerd[1146]: time="2025-12-28T07:06:13.753866545Z" level=info msg="Loading containers: done."
Dec 28 07:06:13 force-systemd-flag-649810 dockerd[1146]: time="2025-12-28T07:06:13.769990297Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
Dec 28 07:06:13 force-systemd-flag-649810 dockerd[1146]: time="2025-12-28T07:06:13.770166858Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
Dec 28 07:06:13 force-systemd-flag-649810 dockerd[1146]: time="2025-12-28T07:06:13.770266330Z" level=info msg="Initializing buildkit"
Dec 28 07:06:13 force-systemd-flag-649810 dockerd[1146]: time="2025-12-28T07:06:13.790263267Z" level=info msg="Completed buildkit initialization"
Dec 28 07:06:13 force-systemd-flag-649810 dockerd[1146]: time="2025-12-28T07:06:13.795789002Z" level=info msg="Daemon has completed initialization"
Dec 28 07:06:13 force-systemd-flag-649810 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 28 07:06:13 force-systemd-flag-649810 dockerd[1146]: time="2025-12-28T07:06:13.805442276Z" level=info msg="API listen on /var/run/docker.sock"
Dec 28 07:06:13 force-systemd-flag-649810 dockerd[1146]: time="2025-12-28T07:06:13.805588609Z" level=info msg="API listen on /run/docker.sock"
Dec 28 07:06:13 force-systemd-flag-649810 dockerd[1146]: time="2025-12-28T07:06:13.805605306Z" level=info msg="API listen on [::]:2376"
Dec 28 07:06:14 force-systemd-flag-649810 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
Dec 28 07:06:14 force-systemd-flag-649810 cri-dockerd[1431]: time="2025-12-28T07:06:14Z" level=info msg="Starting cri-dockerd dev (HEAD)"
Dec 28 07:06:14 force-systemd-flag-649810 cri-dockerd[1431]: time="2025-12-28T07:06:14Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
Dec 28 07:06:14 force-systemd-flag-649810 cri-dockerd[1431]: time="2025-12-28T07:06:14Z" level=info msg="Start docker client with request timeout 0s"
Dec 28 07:06:14 force-systemd-flag-649810 cri-dockerd[1431]: time="2025-12-28T07:06:14Z" level=info msg="Hairpin mode is set to hairpin-veth"
Dec 28 07:06:14 force-systemd-flag-649810 cri-dockerd[1431]: time="2025-12-28T07:06:14Z" level=info msg="Loaded network plugin cni"
Dec 28 07:06:14 force-systemd-flag-649810 cri-dockerd[1431]: time="2025-12-28T07:06:14Z" level=info msg="Docker cri networking managed by network plugin cni"
Dec 28 07:06:14 force-systemd-flag-649810 cri-dockerd[1431]: time="2025-12-28T07:06:14Z" level=info msg="Setting cgroupDriver systemd"
Dec 28 07:06:14 force-systemd-flag-649810 cri-dockerd[1431]: time="2025-12-28T07:06:14Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
Dec 28 07:06:14 force-systemd-flag-649810 cri-dockerd[1431]: time="2025-12-28T07:06:14Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
Dec 28 07:06:14 force-systemd-flag-649810 cri-dockerd[1431]: time="2025-12-28T07:06:14Z" level=info msg="Start cri-dockerd grpc backend"
Dec 28 07:06:14 force-systemd-flag-649810 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
==> describe nodes <==
command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1228 07:14:24.208502 5650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:14:24.209210 5650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:14:24.210898 5650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:14:24.211504 5650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:14:24.213121 5650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
==> dmesg <==
[Dec28 06:17] ACPI: SRAT not present
[ +0.000000] ACPI: SRAT not present
[ +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
[ +0.015148] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[ +0.500432] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
[ +0.034760] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
[ +0.784008] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
[ +6.137634] kauditd_printk_skb: 36 callbacks suppressed
[Dec28 06:42] hrtimer: interrupt took 11242004 ns
==> kernel <==
07:14:24 up 56 min, 0 user, load average: 0.63, 0.93, 1.82
Linux force-systemd-flag-649810 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
==> kubelet <==
Dec 28 07:14:20 force-systemd-flag-649810 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 28 07:14:21 force-systemd-flag-649810 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
Dec 28 07:14:21 force-systemd-flag-649810 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 28 07:14:21 force-systemd-flag-649810 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 28 07:14:21 force-systemd-flag-649810 kubelet[5433]: E1228 07:14:21.561026 5433 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 28 07:14:21 force-systemd-flag-649810 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 28 07:14:21 force-systemd-flag-649810 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 28 07:14:22 force-systemd-flag-649810 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
Dec 28 07:14:22 force-systemd-flag-649810 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 28 07:14:22 force-systemd-flag-649810 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 28 07:14:22 force-systemd-flag-649810 kubelet[5482]: E1228 07:14:22.343595 5482 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 28 07:14:22 force-systemd-flag-649810 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 28 07:14:22 force-systemd-flag-649810 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 28 07:14:22 force-systemd-flag-649810 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
Dec 28 07:14:22 force-systemd-flag-649810 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 28 07:14:22 force-systemd-flag-649810 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 28 07:14:23 force-systemd-flag-649810 kubelet[5532]: E1228 07:14:23.148283 5532 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 28 07:14:23 force-systemd-flag-649810 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 28 07:14:23 force-systemd-flag-649810 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 28 07:14:23 force-systemd-flag-649810 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
Dec 28 07:14:23 force-systemd-flag-649810 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 28 07:14:23 force-systemd-flag-649810 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 28 07:14:24 force-systemd-flag-649810 kubelet[5632]: E1228 07:14:24.120612 5632 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 28 07:14:24 force-systemd-flag-649810 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 28 07:14:24 force-systemd-flag-649810 systemd[1]: kubelet.service: Failed with result 'exit-code'.
-- /stdout --
helpers_test.go:263: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-649810 -n force-systemd-flag-649810
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-649810 -n force-systemd-flag-649810: exit status 6 (474.282279ms)
-- stdout --
Stopped
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1228 07:14:24.819902 239370 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-649810" does not appear in /home/jenkins/minikube-integration/22352-2382/kubeconfig
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-flag-649810" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-flag-649810" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-arm64 delete -p force-systemd-flag-649810
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-649810: (2.073436115s)
--- FAIL: TestForceSystemdFlag (507.14s)
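Every kubelet restart in the journal above fails the same validation ("kubelet is configured to not run on a host using cgroup v1"), so checking which cgroup hierarchy the host exposes is a quick way to confirm the root cause. A minimal sketch, assuming a GNU/Linux shell on the worker (or inside the node container while it still exists); "cgroup2fs" indicates cgroup v2, "tmpfs" indicates cgroup v1:

  # Print the filesystem type mounted at the cgroup root
  stat -fc %T /sys/fs/cgroup/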