=== RUN TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT TestForceSystemdFlag
docker_test.go:91: (dbg) Run: out/minikube-windows-amd64.exe start -p force-systemd-flag-550200 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker
docker_test.go:91: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p force-systemd-flag-550200 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker: exit status 109 (9m21.5507796s)
-- stdout --
* [force-systemd-flag-550200] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 22H2
- KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
- MINIKUBE_LOCATION=22352
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting "force-systemd-flag-550200" primary control-plane node in "force-systemd-flag-550200" cluster
* Pulling base image v0.0.48-1766884053-22351 ...
-- /stdout --
** stderr **
I1228 07:19:09.945542 10956 out.go:360] Setting OutFile to fd 1596 ...
I1228 07:19:10.026353 10956 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 07:19:10.026419 10956 out.go:374] Setting ErrFile to fd 1664...
I1228 07:19:10.026444 10956 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 07:19:10.044977 10956 out.go:368] Setting JSON to false
I1228 07:19:10.048224 10956 start.go:133] hostinfo: {"hostname":"minikube4","uptime":6289,"bootTime":1766900060,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"22H2","kernelVersion":"10.0.19045.6691 Build 19045.6691","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
W1228 07:19:10.048224 10956 start.go:141] gopshost.Virtualization returned error: not implemented yet
I1228 07:19:10.053565 10956 out.go:179] * [force-systemd-flag-550200] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 22H2
I1228 07:19:10.060278 10956 notify.go:221] Checking for updates...
I1228 07:19:10.065228 10956 out.go:179] - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
I1228 07:19:10.070252 10956 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1228 07:19:10.075776 10956 out.go:179] - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
I1228 07:19:10.083237 10956 out.go:179] - MINIKUBE_LOCATION=22352
I1228 07:19:10.086684 10956 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1228 07:19:10.091324 10956 driver.go:422] Setting default libvirt URI to qemu:///system
I1228 07:19:10.249711 10956 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
I1228 07:19:10.253911 10956 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1228 07:19:10.682141 10956 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:80 SystemTime:2025-12-28 07:19:10.661909501 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
I1228 07:19:10.686141 10956 out.go:179] * Using the docker driver based on user configuration
I1228 07:19:10.689153 10956 start.go:309] selected driver: docker
I1228 07:19:10.689153 10956 start.go:928] validating driver "docker" against <nil>
I1228 07:19:10.689153 10956 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1228 07:19:10.696144 10956 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1228 07:19:11.041541 10956 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:80 SystemTime:2025-12-28 07:19:11.020193516 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
I1228 07:19:11.041541 10956 start_flags.go:333] no existing cluster config was found, will generate one from the flags
I1228 07:19:11.042541 10956 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
I1228 07:19:11.400618 10956 out.go:179] * Using Docker Desktop driver with root privileges
I1228 07:19:11.414529 10956 cni.go:84] Creating CNI manager for ""
I1228 07:19:11.414529 10956 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1228 07:19:11.414529 10956 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I1228 07:19:11.414529 10956 start.go:353] cluster config:
{Name:force-systemd-flag-550200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-550200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1228 07:19:11.452563 10956 out.go:179] * Starting "force-systemd-flag-550200" primary control-plane node in "force-systemd-flag-550200" cluster
I1228 07:19:11.466043 10956 cache.go:134] Beginning downloading kic base image for docker with docker
I1228 07:19:11.486723 10956 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
I1228 07:19:11.499037 10956 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
I1228 07:19:11.499073 10956 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I1228 07:19:11.499299 10956 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4
I1228 07:19:11.499367 10956 cache.go:65] Caching tarball of preloaded images
I1228 07:19:11.499657 10956 preload.go:251] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1228 07:19:11.499816 10956 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
I1228 07:19:11.500398 10956 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\config.json ...
I1228 07:19:11.500584 10956 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\config.json: {Name:mkc4f0fcb183c76eff9b9a6f79aae1fd565a77e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1228 07:19:11.578656 10956 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
I1228 07:19:11.578656 10956 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
I1228 07:19:11.578656 10956 cache.go:243] Successfully downloaded all kic artifacts
I1228 07:19:11.578656 10956 start.go:360] acquireMachinesLock for force-systemd-flag-550200: {Name:mk1102644977e3c3e95d5da7d5c083d9caab1082 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1228 07:19:11.578656 10956 start.go:364] duration metric: took 0s to acquireMachinesLock for "force-systemd-flag-550200"
I1228 07:19:11.578656 10956 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-550200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-550200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I1228 07:19:11.579658 10956 start.go:125] createHost starting for "" (driver="docker")
I1228 07:19:11.702088 10956 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
I1228 07:19:11.702738 10956 start.go:159] libmachine.API.Create for "force-systemd-flag-550200" (driver="docker")
I1228 07:19:11.702828 10956 client.go:173] LocalClient.Create starting
I1228 07:19:11.703359 10956 main.go:144] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
I1228 07:19:11.703644 10956 main.go:144] libmachine: Decoding PEM data...
I1228 07:19:11.703693 10956 main.go:144] libmachine: Parsing certificate...
I1228 07:19:11.703929 10956 main.go:144] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
I1228 07:19:11.703986 10956 main.go:144] libmachine: Decoding PEM data...
I1228 07:19:11.703986 10956 main.go:144] libmachine: Parsing certificate...
I1228 07:19:11.712765 10956 cli_runner.go:164] Run: docker network inspect force-systemd-flag-550200 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1228 07:19:11.773354 10956 cli_runner.go:211] docker network inspect force-systemd-flag-550200 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1228 07:19:11.777358 10956 network_create.go:284] running [docker network inspect force-systemd-flag-550200] to gather additional debugging logs...
I1228 07:19:11.777358 10956 cli_runner.go:164] Run: docker network inspect force-systemd-flag-550200
W1228 07:19:11.825357 10956 cli_runner.go:211] docker network inspect force-systemd-flag-550200 returned with exit code 1
I1228 07:19:11.825357 10956 network_create.go:287] error running [docker network inspect force-systemd-flag-550200]: docker network inspect force-systemd-flag-550200: exit status 1
stdout:
[]
stderr:
Error response from daemon: network force-systemd-flag-550200 not found
I1228 07:19:11.825357 10956 network_create.go:289] output of [docker network inspect force-systemd-flag-550200]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network force-systemd-flag-550200 not found
** /stderr **
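The exit-status-1 probe above is how minikube distinguishes a genuinely missing network (the daemon's "not found" message on stderr) from other inspect failures before deciding to create one. A minimal Go sketch of the same existence check, assuming only the docker CLI on PATH; networkExists is a hypothetical helper name, not minikube's actual function:

    // networkExists reports whether a docker network with the given name
    // exists, treating a non-zero exit from "docker network inspect" as
    // "not found" only when stderr carries the daemon's not-found message.
    // Illustrative sketch, not minikube's implementation.
    package main

    import (
    	"bytes"
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func networkExists(name string) (bool, error) {
    	cmd := exec.Command("docker", "network", "inspect", name)
    	var stderr bytes.Buffer
    	cmd.Stderr = &stderr
    	if err := cmd.Run(); err != nil {
    		if strings.Contains(stderr.String(), "not found") {
    			return false, nil // same signal as the log above
    		}
    		return false, fmt.Errorf("docker network inspect %s: %w", name, err)
    	}
    	return true, nil
    }

    func main() {
    	ok, err := networkExists("force-systemd-flag-550200")
    	fmt.Println(ok, err)
    }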
I1228 07:19:11.830360 10956 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1228 07:19:11.905372 10956 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I1228 07:19:11.937356 10956 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I1228 07:19:11.954358 10956 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001772690}
I1228 07:19:11.954358 10956 network_create.go:124] attempt to create docker network force-systemd-flag-550200 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
I1228 07:19:11.957357 10956 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-550200 force-systemd-flag-550200
W1228 07:19:12.027561 10956 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-550200 force-systemd-flag-550200 returned with exit code 1
W1228 07:19:12.027561 10956 network_create.go:149] failed to create docker network force-systemd-flag-550200 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-550200 force-systemd-flag-550200: exit status 1
stdout:
stderr:
Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
W1228 07:19:12.027561 10956 network_create.go:116] failed to create docker network force-systemd-flag-550200 192.168.67.0/24, will retry: subnet is taken
I1228 07:19:12.059559 10956 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I1228 07:19:12.091557 10956 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I1228 07:19:12.123547 10956 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I1228 07:19:12.137565 10956 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001862060}
I1228 07:19:12.137565 10956 network_create.go:124] attempt to create docker network force-systemd-flag-550200 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
I1228 07:19:12.141555 10956 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-550200 force-systemd-flag-550200
I1228 07:19:12.308553 10956 network_create.go:108] docker network force-systemd-flag-550200 192.168.94.0/24 created
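The sequence above shows the free-subnet scan: candidate /24s step the third octet by 9 (49, 58, 67, 76, 85, 94), already-reserved subnets are skipped, and a candidate that fails creation with "Pool overlaps" is marked taken before the scan resumes. A minimal sketch of that candidate walk; the starting octet and step size are inferred from this log, not taken from minikube's network package:

    // pickSubnet walks candidate /24s (third octet stepping by 9, as seen in
    // the log: .49, .58, .67, .76, .85, .94) and returns the first CIDR not
    // in the reserved set. Illustrative only; the real logic also probes the
    // daemon and retries when creation fails with a pool overlap.
    package main

    import "fmt"

    func pickSubnet(reserved map[string]bool) (string, bool) {
    	for octet := 49; octet <= 254; octet += 9 {
    		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
    		if !reserved[cidr] {
    			return cidr, true
    		}
    	}
    	return "", false
    }

    func main() {
    	reserved := map[string]bool{
    		"192.168.49.0/24": true, // skipped as reserved in the log
    		"192.168.58.0/24": true,
    		"192.168.67.0/24": true, // creation failed: pool overlap
    		"192.168.76.0/24": true,
    		"192.168.85.0/24": true,
    	}
    	fmt.Println(pickSubnet(reserved)) // 192.168.94.0/24 true, matching the log
    }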
I1228 07:19:12.308553 10956 kic.go:121] calculated static IP "192.168.94.2" for the "force-systemd-flag-550200" container
I1228 07:19:12.315552 10956 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1228 07:19:12.394439 10956 cli_runner.go:164] Run: docker volume create force-systemd-flag-550200 --label name.minikube.sigs.k8s.io=force-systemd-flag-550200 --label created_by.minikube.sigs.k8s.io=true
I1228 07:19:12.455418 10956 oci.go:103] Successfully created a docker volume force-systemd-flag-550200
I1228 07:19:12.459420 10956 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-550200-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-550200 --entrypoint /usr/bin/test -v force-systemd-flag-550200:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -d /var/lib
I1228 07:19:14.418157 10956 cli_runner.go:217] Completed: docker run --rm --name force-systemd-flag-550200-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-550200 --entrypoint /usr/bin/test -v force-systemd-flag-550200:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -d /var/lib: (1.9587086s)
I1228 07:19:14.418157 10956 oci.go:107] Successfully prepared a docker volume force-systemd-flag-550200
I1228 07:19:14.418157 10956 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I1228 07:19:14.418157 10956 kic.go:194] Starting extracting preloaded images to volume ...
I1228 07:19:14.429082 10956 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-550200:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -I lz4 -xf /preloaded.tar -C /extractDir
I1228 07:20:02.662084 10956 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-550200:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -I lz4 -xf /preloaded.tar -C /extractDir: (48.2323051s)
I1228 07:20:02.662084 10956 kic.go:203] duration metric: took 48.2432293s to extract preloaded images to volume ...
I1228 07:20:02.668547 10956 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1228 07:20:03.082145 10956 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:true NGoroutines:89 SystemTime:2025-12-28 07:20:03.060406893 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
I1228 07:20:03.086147 10956 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1228 07:20:03.526431 10956 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-550200 --name force-systemd-flag-550200 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-550200 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-550200 --network force-systemd-flag-550200 --ip 192.168.94.2 --volume force-systemd-flag-550200:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1
I1228 07:20:06.127125 10956 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-550200 --name force-systemd-flag-550200 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-550200 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-550200 --network force-systemd-flag-550200 --ip 192.168.94.2 --volume force-systemd-flag-550200:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1: (2.6005485s)
I1228 07:20:06.132308 10956 cli_runner.go:164] Run: docker container inspect force-systemd-flag-550200 --format={{.State.Running}}
I1228 07:20:06.199542 10956 cli_runner.go:164] Run: docker container inspect force-systemd-flag-550200 --format={{.State.Status}}
I1228 07:20:06.266538 10956 cli_runner.go:164] Run: docker exec force-systemd-flag-550200 stat /var/lib/dpkg/alternatives/iptables
I1228 07:20:06.405553 10956 oci.go:144] the created container "force-systemd-flag-550200" has a running status.
I1228 07:20:06.405553 10956 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-550200\id_rsa...
I1228 07:20:06.576001 10956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-550200\id_rsa.pub -> /home/docker/.ssh/authorized_keys
I1228 07:20:06.591993 10956 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-550200\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1228 07:20:06.682003 10956 cli_runner.go:164] Run: docker container inspect force-systemd-flag-550200 --format={{.State.Status}}
I1228 07:20:06.760031 10956 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1228 07:20:06.760031 10956 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-550200 chown docker:docker /home/docker/.ssh/authorized_keys]
I1228 07:20:06.905852 10956 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-550200\id_rsa...
I1228 07:20:09.856351 10956 cli_runner.go:164] Run: docker container inspect force-systemd-flag-550200 --format={{.State.Status}}
I1228 07:20:09.920362 10956 machine.go:94] provisionDockerMachine start ...
I1228 07:20:09.926359 10956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-550200
I1228 07:20:10.000354 10956 main.go:144] libmachine: Using SSH client type: native
I1228 07:20:10.018361 10956 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7c03fe200] 0x7ff7c0400d60 <nil> [] 0s} 127.0.0.1 54898 <nil> <nil>}
I1228 07:20:10.018361 10956 main.go:144] libmachine: About to run SSH command:
hostname
I1228 07:20:10.206226 10956 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-550200
I1228 07:20:10.206321 10956 ubuntu.go:182] provisioning hostname "force-systemd-flag-550200"
I1228 07:20:10.211897 10956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-550200
I1228 07:20:10.267874 10956 main.go:144] libmachine: Using SSH client type: native
I1228 07:20:10.267874 10956 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7c03fe200] 0x7ff7c0400d60 <nil> [] 0s} 127.0.0.1 54898 <nil> <nil>}
I1228 07:20:10.267874 10956 main.go:144] libmachine: About to run SSH command:
sudo hostname force-systemd-flag-550200 && echo "force-systemd-flag-550200" | sudo tee /etc/hostname
I1228 07:20:10.456395 10956 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-550200
I1228 07:20:10.460844 10956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-550200
I1228 07:20:10.523621 10956 main.go:144] libmachine: Using SSH client type: native
I1228 07:20:10.524620 10956 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7c03fe200] 0x7ff7c0400d60 <nil> [] 0s} 127.0.0.1 54898 <nil> <nil>}
I1228 07:20:10.524620 10956 main.go:144] libmachine: About to run SSH command:
if ! grep -xq '.*\sforce-systemd-flag-550200' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-550200/g' /etc/hosts;
else
echo '127.0.1.1 force-systemd-flag-550200' | sudo tee -a /etc/hosts;
fi
fi
I1228 07:20:10.684283 10956 main.go:144] libmachine: SSH cmd err, output: <nil>:
I1228 07:20:10.684283 10956 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
I1228 07:20:10.684853 10956 ubuntu.go:190] setting up certificates
I1228 07:20:10.684853 10956 provision.go:84] configureAuth start
I1228 07:20:10.688659 10956 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-550200
I1228 07:20:10.747751 10956 provision.go:143] copyHostCerts
I1228 07:20:10.747751 10956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem
I1228 07:20:10.747751 10956 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
I1228 07:20:10.747751 10956 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
I1228 07:20:10.747751 10956 provision.go:87] duration metric: took 62.8974ms to configureAuth
W1228 07:20:10.747751 10956 ubuntu.go:193] configureAuth failed: transferring file: &{BaseAsset:{SourcePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem TargetDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube TargetName:ca.pem Permissions:0777 Source:} reader:0xc001e120c0 writer:<nil> file:0xc00082cad8}: error removing file C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem: remove C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem: The process cannot access the file because it is being used by another process.
I1228 07:20:10.748747 10956 retry.go:84] will retry after 0s: Temporary Error: transferring file: &{BaseAsset:{SourcePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem TargetDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube TargetName:ca.pem Permissions:0777 Source:} reader:0xc001e120c0 writer:<nil> file:0xc00082cad8}: error removing file C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem: remove C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem: The process cannot access the file because it is being used by another process.
I1228 07:20:10.749747 10956 provision.go:84] configureAuth start
I1228 07:20:10.752749 10956 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-550200
I1228 07:20:10.804747 10956 provision.go:143] copyHostCerts
I1228 07:20:10.804747 10956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem
I1228 07:20:10.805744 10956 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
I1228 07:20:10.805744 10956 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
I1228 07:20:10.805744 10956 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
I1228 07:20:10.806758 10956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem
I1228 07:20:10.806758 10956 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
I1228 07:20:10.806758 10956 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
I1228 07:20:10.806758 10956 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
I1228 07:20:10.807752 10956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem
I1228 07:20:10.807752 10956 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
I1228 07:20:10.807752 10956 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
I1228 07:20:10.807752 10956 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
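The copyHostCerts retry above is a Windows-specific hazard: the first attempt failed because another process (here, a concurrent minikube test) still held ca.pem open, so the removal hit a sharing violation and retry.go reran the whole configureAuth step. A hedged sketch of retrying a removal that hits that error; retryRemove and its message check are illustrative, not minikube's retry.go API:

    // retryRemove retries os.Remove when Windows reports a sharing violation
    // ("being used by another process"), as happened to ca.pem above.
    // Sketch only; minikube's retry.go applies its own backoff policy.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    	"time"
    )

    func retryRemove(path string, attempts int, delay time.Duration) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = os.Remove(path); err == nil || os.IsNotExist(err) {
    			return nil
    		}
    		if !strings.Contains(err.Error(), "used by another process") {
    			return err // not a transient lock; give up immediately
    		}
    		time.Sleep(delay)
    	}
    	return fmt.Errorf("remove %s: %w", path, err)
    }

    func main() {
    	fmt.Println(retryRemove(`C:\tmp\ca.pem`, 3, 500*time.Millisecond))
    }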
I1228 07:20:10.808749 10956 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.force-systemd-flag-550200 san=[127.0.0.1 192.168.94.2 force-systemd-flag-550200 localhost minikube]
I1228 07:20:10.979957 10956 provision.go:177] copyRemoteCerts
I1228 07:20:10.985236 10956 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1228 07:20:10.989412 10956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-550200
I1228 07:20:11.040612 10956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54898 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-550200\id_rsa Username:docker}
I1228 07:20:11.161907 10956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
I1228 07:20:11.161907 10956 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1228 07:20:11.190869 10956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
I1228 07:20:11.191871 10956 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1204 bytes)
I1228 07:20:11.217867 10956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
I1228 07:20:11.217867 10956 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1228 07:20:11.242871 10956 provision.go:87] duration metric: took 493.1172ms to configureAuth
I1228 07:20:11.242871 10956 ubuntu.go:206] setting minikube options for container-runtime
I1228 07:20:11.242871 10956 config.go:182] Loaded profile config "force-systemd-flag-550200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1228 07:20:11.246869 10956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-550200
I1228 07:20:11.302720 10956 main.go:144] libmachine: Using SSH client type: native
I1228 07:20:11.302919 10956 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7c03fe200] 0x7ff7c0400d60 <nil> [] 0s} 127.0.0.1 54898 <nil> <nil>}
I1228 07:20:11.302919 10956 main.go:144] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1228 07:20:11.494758 10956 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
I1228 07:20:11.494787 10956 ubuntu.go:71] root file system type: overlay
I1228 07:20:11.494985 10956 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I1228 07:20:11.501016 10956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-550200
I1228 07:20:11.560421 10956 main.go:144] libmachine: Using SSH client type: native
I1228 07:20:11.561432 10956 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7c03fe200] 0x7ff7c0400d60 <nil> [] 0s} 127.0.0.1 54898 <nil> <nil>}
I1228 07:20:11.561432 10956 main.go:144] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
Wants=network-online.target containerd.service
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=always
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
-H fd:// --containerd=/run/containerd/containerd.sock \
-H unix:///var/run/docker.sock \
--default-ulimit=nofile=1048576:1048576 \
--tlsverify \
--tlscacert /etc/docker/ca.pem \
--tlscert /etc/docker/server.pem \
--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1228 07:20:11.738665 10956 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
Wants=network-online.target containerd.service
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=always
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H fd:// --containerd=/run/containerd/containerd.sock -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
I1228 07:20:11.741664 10956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-550200
I1228 07:20:11.810983 10956 main.go:144] libmachine: Using SSH client type: native
I1228 07:20:11.812244 10956 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7c03fe200] 0x7ff7c0400d60 <nil> [] 0s} 127.0.0.1 54898 <nil> <nil>}
I1228 07:20:11.812244 10956 main.go:144] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1228 07:20:14.311277 10956 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2025-12-12 14:48:15.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2025-12-28 07:20:11.730362367 +0000
@@ -9,23 +9,34 @@
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
Restart=always
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H fd:// --containerd=/run/containerd/containerd.sock -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
+
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
+LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this option.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
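The SSH command logged just before this output is an idempotent compare-and-swap: diff -u exits nonzero only when the rendered unit differs from what is on disk, and only then is the new file moved into place and docker reloaded, enabled, and restarted. The same pattern as a standalone sketch (paths as in this run):

    # replace the unit only when its content actually changed
    if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl -f daemon-reload
      sudo systemctl -f enable docker
      sudo systemctl -f restart docker
    fi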
I1228 07:20:14.311277 10956 machine.go:97] duration metric: took 4.3908512s to provisionDockerMachine
I1228 07:20:14.311277 10956 client.go:176] duration metric: took 1m2.6075446s to LocalClient.Create
I1228 07:20:14.311277 10956 start.go:167] duration metric: took 1m2.6076338s to libmachine.API.Create "force-systemd-flag-550200"
I1228 07:20:14.311277 10956 start.go:293] postStartSetup for "force-systemd-flag-550200" (driver="docker")
I1228 07:20:14.311277 10956 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1228 07:20:14.316543 10956 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1228 07:20:14.321090 10956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-550200
I1228 07:20:14.371309 10956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54898 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-550200\id_rsa Username:docker}
I1228 07:20:14.553998 10956 ssh_runner.go:195] Run: cat /etc/os-release
I1228 07:20:14.564061 10956 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1228 07:20:14.564061 10956 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1228 07:20:14.564061 10956 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
I1228 07:20:14.564061 10956 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
I1228 07:20:14.564975 10956 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\135562.pem -> 135562.pem in /etc/ssl/certs
I1228 07:20:14.564975 10956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\135562.pem -> /etc/ssl/certs/135562.pem
I1228 07:20:14.570653 10956 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1228 07:20:14.586316 10956 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\135562.pem --> /etc/ssl/certs/135562.pem (1708 bytes)
I1228 07:20:14.621463 10956 start.go:296] duration metric: took 310.1814ms for postStartSetup
I1228 07:20:14.630457 10956 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-550200
I1228 07:20:14.695864 10956 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\config.json ...
I1228 07:20:14.706527 10956 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1228 07:20:14.710655 10956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-550200
I1228 07:20:14.765477 10956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54898 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-550200\id_rsa Username:docker}
I1228 07:20:14.902812 10956 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1228 07:20:14.914128 10956 start.go:128] duration metric: took 1m3.3335553s to createHost
I1228 07:20:14.914128 10956 start.go:83] releasing machines lock for "force-systemd-flag-550200", held for 1m3.3345576s
I1228 07:20:14.919119 10956 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-550200
I1228 07:20:14.988126 10956 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
I1228 07:20:14.991128 10956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-550200
I1228 07:20:14.992125 10956 ssh_runner.go:195] Run: cat /version.json
I1228 07:20:14.997119 10956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-550200
I1228 07:20:15.058196 10956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54898 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-550200\id_rsa Username:docker}
I1228 07:20:15.066206 10956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54898 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-550200\id_rsa Username:docker}
W1228 07:20:15.275369 10956 start.go:879] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
stdout:
stderr:
bash: line 1: curl.exe: command not found
I1228 07:20:15.284377 10956 ssh_runner.go:195] Run: systemctl --version
I1228 07:20:15.307126 10956 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1228 07:20:15.316403 10956 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1228 07:20:15.323072 10956 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1228 07:20:15.375677 10956 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1228 07:20:15.375677 10956 start.go:496] detecting cgroup driver to use...
I1228 07:20:15.375677 10956 start.go:500] using "systemd" cgroup driver as enforced via flags
I1228 07:20:15.375677 10956 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
W1228 07:20:15.379672 10956 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
! Failing to connect to https://registry.k8s.io/ from inside the minikube container
W1228 07:20:15.379672 10956 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
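The probe failed for an incidental reason: curl.exe is the Windows binary name, but the command ran inside the Linux node container, where bash reports it as not found. To repeat the reachability check by hand, something like the following should work (container/profile name taken from this run; assumes plain curl exists in the node image):

    # from the Windows host, probe the registry inside the node container
    docker exec force-systemd-flag-550200 curl -sS -m 2 https://registry.k8s.io/
    # or through minikube's SSH wrapper
    minikube -p force-systemd-flag-550200 ssh -- curl -sS -m 2 https://registry.k8s.io/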
I1228 07:20:15.402674 10956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1228 07:20:15.432172 10956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1228 07:20:15.449211 10956 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
I1228 07:20:15.454795 10956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
I1228 07:20:15.479910 10956 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1228 07:20:15.496899 10956 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1228 07:20:15.515899 10956 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1228 07:20:15.534898 10956 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1228 07:20:15.550898 10956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1228 07:20:15.569174 10956 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1228 07:20:15.596504 10956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1228 07:20:15.621758 10956 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1228 07:20:15.639057 10956 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1228 07:20:15.655039 10956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1228 07:20:15.840106 10956 ssh_runner.go:195] Run: sudo systemctl restart containerd
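The run of sed edits above rewrites /etc/containerd/config.toml in place, most importantly flipping SystemdCgroup to true so containerd's runc shim delegates cgroup management to systemd. After the restart, the effective driver can be confirmed; the docker query below is the same one this log issues later, while the grep is a hedged extra:

    # confirm the systemd cgroup driver took effect
    sudo grep -n 'SystemdCgroup' /etc/containerd/config.toml
    docker info --format '{{.CgroupDriver}}'   # expect: systemd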
I1228 07:20:16.048914 10956 start.go:496] detecting cgroup driver to use...
I1228 07:20:16.048988 10956 start.go:500] using "systemd" cgroup driver as enforced via flags
I1228 07:20:16.055392 10956 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I1228 07:20:16.083565 10956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1228 07:20:16.106782 10956 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1228 07:20:16.178346 10956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1228 07:20:16.204187 10956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1228 07:20:16.223991 10956 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I1228 07:20:16.257916 10956 ssh_runner.go:195] Run: which cri-dockerd
I1228 07:20:16.276284 10956 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I1228 07:20:16.291306 10956 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
I1228 07:20:16.318294 10956 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I1228 07:20:16.510441 10956 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I1228 07:20:16.659787 10956 docker.go:578] configuring docker to use "systemd" as cgroup driver...
I1228 07:20:16.659961 10956 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
I1228 07:20:16.688458 10956 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
I1228 07:20:16.715937 10956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1228 07:20:16.881963 10956 ssh_runner.go:195] Run: sudo systemctl restart docker
I1228 07:20:23.640310 10956 ssh_runner.go:235] Completed: sudo systemctl restart docker: (6.7582068s)
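The 129-byte /etc/docker/daemon.json copied in just before this restart is what moves dockerd itself onto the systemd cgroup driver. The log does not echo the file, so the following is only a representative sketch of such a configuration, not the exact bytes minikube wrote:

    # illustrative daemon.json forcing the systemd cgroup driver
    sudo tee /etc/docker/daemon.json <<'EOF'
    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart docker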
I1228 07:20:23.644551 10956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1228 07:20:23.677445 10956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I1228 07:20:23.706470 10956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I1228 07:20:23.733677 10956 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I1228 07:20:23.901526 10956 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I1228 07:20:24.068194 10956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1228 07:20:24.273582 10956 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I1228 07:20:24.301765 10956 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
I1228 07:20:24.325840 10956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1228 07:20:24.469463 10956 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I1228 07:20:24.579819 10956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I1228 07:20:24.687464 10956 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
I1228 07:20:24.693572 10956 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I1228 07:20:24.702875 10956 start.go:574] Will wait 60s for crictl version
I1228 07:20:24.707624 10956 ssh_runner.go:195] Run: which crictl
I1228 07:20:24.724794 10956 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1228 07:20:24.791350 10956 start.go:590] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 29.1.3
RuntimeApiVersion: v1
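crictl picks up its endpoint from the /etc/crictl.yaml written a moment earlier (runtime-endpoint: unix:///var/run/cri-dockerd.sock), so the version call above went through cri-dockerd rather than the docker CLI. Other CRI-level queries take the same path, for example (a sketch; crictl info is not something this run invokes):

    sudo /usr/local/bin/crictl version
    sudo /usr/local/bin/crictl info | head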
I1228 07:20:24.796094 10956 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1228 07:20:24.838895 10956 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1228 07:20:24.889882 10956 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 29.1.3 ...
I1228 07:20:24.893890 10956 cli_runner.go:164] Run: docker exec -t force-systemd-flag-550200 dig +short host.docker.internal
I1228 07:20:25.028382 10956 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
I1228 07:20:25.035386 10956 ssh_runner.go:195] Run: grep 192.168.65.254 host.minikube.internal$ /etc/hosts
I1228 07:20:25.044398 10956 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
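The /etc/hosts rewrite above is another idempotent pattern: filter out any stale host.minikube.internal line, append the fresh mapping, and then sudo cp the temp file into place, since a plain redirect onto /etc/hosts would be performed by the unprivileged shell rather than by sudo. Isolated for reference:

    # drop the old entry, append the new one, install as root
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo "192.168.65.254 host.minikube.internal"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts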
I1228 07:20:25.063389 10956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" force-systemd-flag-550200
I1228 07:20:25.116387 10956 kubeadm.go:884] updating cluster {Name:force-systemd-flag-550200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-550200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
I1228 07:20:25.116387 10956 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I1228 07:20:25.119394 10956 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1228 07:20:25.153386 10956 docker.go:694] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
registry.k8s.io/pause:3.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I1228 07:20:25.153386 10956 docker.go:624] Images already preloaded, skipping extraction
I1228 07:20:25.157386 10956 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1228 07:20:25.189391 10956 docker.go:694] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
registry.k8s.io/pause:3.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I1228 07:20:25.189391 10956 cache_images.go:86] Images are preloaded, skipping loading
I1228 07:20:25.189391 10956 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0 docker true true} ...
I1228 07:20:25.189391 10956 kubeadm.go:947] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-550200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
[Install]
config:
{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-550200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1228 07:20:25.194408 10956 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I1228 07:20:25.273139 10956 cni.go:84] Creating CNI manager for ""
I1228 07:20:25.273139 10956 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1228 07:20:25.273139 10956 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1228 07:20:25.273139 10956 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-550200 NodeName:force-systemd-flag-550200 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1228 07:20:25.273139 10956 kubeadm.go:203] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.94.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "force-systemd-flag-550200"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.94.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.35.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
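The generated kubeadm config above carries the forced choice through to the kubelet (cgroupDriver: systemd) and pins the CRI socket to cri-dockerd. When debugging a config like this, it can be exercised without touching the node; the commands below are a hedged suggestion (kubeadm config validate exists from v1.26 on) rather than anything this run performs:

    sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
    sudo /var/lib/minikube/binaries/v1.35.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run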
I1228 07:20:25.277124 10956 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
I1228 07:20:25.290122 10956 binaries.go:51] Found k8s binaries, skipping transfer
I1228 07:20:25.295121 10956 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1228 07:20:25.307126 10956 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
I1228 07:20:25.327132 10956 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1228 07:20:25.347327 10956 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
I1228 07:20:25.374403 10956 ssh_runner.go:195] Run: grep 192.168.94.2 control-plane.minikube.internal$ /etc/hosts
I1228 07:20:25.381332 10956 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1228 07:20:25.402504 10956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1228 07:20:25.573471 10956 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1228 07:20:25.596187 10956 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200 for IP: 192.168.94.2
I1228 07:20:25.596187 10956 certs.go:195] generating shared ca certs ...
I1228 07:20:25.596187 10956 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1228 07:20:25.596721 10956 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
I1228 07:20:25.597181 10956 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
I1228 07:20:25.597298 10956 certs.go:257] generating profile certs ...
I1228 07:20:25.597890 10956 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\client.key
I1228 07:20:25.598089 10956 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\client.crt with IP's: []
I1228 07:20:25.671035 10956 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\client.crt ...
I1228 07:20:25.671035 10956 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\client.crt: {Name:mkeca33edbc926c4db6950fc71e673d941c9c5c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1228 07:20:25.671859 10956 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\client.key ...
I1228 07:20:25.671859 10956 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\client.key: {Name:mka0644c661ad6783cefb18b8a346f500e1e790f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1228 07:20:25.672865 10956 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\apiserver.key.e9e44517
I1228 07:20:25.672865 10956 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\apiserver.crt.e9e44517 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
I1228 07:20:25.758013 10956 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\apiserver.crt.e9e44517 ...
I1228 07:20:25.758013 10956 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\apiserver.crt.e9e44517: {Name:mkd62fa7e059cebd6c3b5a6c81d7fbfca6ad136f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1228 07:20:25.758013 10956 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\apiserver.key.e9e44517 ...
I1228 07:20:25.758013 10956 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\apiserver.key.e9e44517: {Name:mkf8cf27f77f5c6d7677777eb24e4b3275e15fe9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1228 07:20:25.759780 10956 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\apiserver.crt.e9e44517 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\apiserver.crt
I1228 07:20:25.770464 10956 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\apiserver.key.e9e44517 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\apiserver.key
I1228 07:20:25.788622 10956 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\proxy-client.key
I1228 07:20:25.789200 10956 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\proxy-client.crt with IP's: []
I1228 07:20:25.874065 10956 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\proxy-client.crt ...
I1228 07:20:25.874065 10956 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\proxy-client.crt: {Name:mk5776f318ae9538168ceb01c5acd41dab52c41a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1228 07:20:25.874431 10956 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\proxy-client.key ...
I1228 07:20:25.874431 10956 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\proxy-client.key: {Name:mked8db6b11b02c360e25d18a4e35f554d068b66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1228 07:20:25.875770 10956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
I1228 07:20:25.876171 10956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
I1228 07:20:25.876299 10956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I1228 07:20:25.876380 10956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I1228 07:20:25.876380 10956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I1228 07:20:25.876380 10956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\apiserver.key -> /var/lib/minikube/certs/apiserver.key
I1228 07:20:25.876380 10956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I1228 07:20:25.887020 10956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I1228 07:20:25.887795 10956 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13556.pem (1338 bytes)
W1228 07:20:25.888586 10956 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13556_empty.pem, impossibly tiny 0 bytes
I1228 07:20:25.888586 10956 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
I1228 07:20:25.888845 10956 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
I1228 07:20:25.889071 10956 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
I1228 07:20:25.889260 10956 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
I1228 07:20:25.889435 10956 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\135562.pem (1708 bytes)
I1228 07:20:25.889435 10956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\135562.pem -> /usr/share/ca-certificates/135562.pem
I1228 07:20:25.889435 10956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I1228 07:20:25.889435 10956 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13556.pem -> /usr/share/ca-certificates/13556.pem
I1228 07:20:25.890363 10956 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1228 07:20:25.923428 10956 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1228 07:20:25.959837 10956 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1228 07:20:25.986565 10956 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1228 07:20:26.020380 10956 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
I1228 07:20:26.050111 10956 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1228 07:20:26.076534 10956 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1228 07:20:26.106184 10956 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-550200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1228 07:20:26.140917 10956 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\135562.pem --> /usr/share/ca-certificates/135562.pem (1708 bytes)
I1228 07:20:26.172919 10956 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1228 07:20:26.202376 10956 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13556.pem --> /usr/share/ca-certificates/13556.pem (1338 bytes)
I1228 07:20:26.235084 10956 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
I1228 07:20:26.258400 10956 ssh_runner.go:195] Run: openssl version
I1228 07:20:26.272403 10956 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/135562.pem
I1228 07:20:26.288416 10956 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/135562.pem /etc/ssl/certs/135562.pem
I1228 07:20:26.304399 10956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/135562.pem
I1228 07:20:26.312400 10956 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:37 /usr/share/ca-certificates/135562.pem
I1228 07:20:26.315406 10956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/135562.pem
I1228 07:20:26.366042 10956 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I1228 07:20:26.382049 10956 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/135562.pem /etc/ssl/certs/3ec20f2e.0
I1228 07:20:26.398047 10956 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1228 07:20:26.414042 10956 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1228 07:20:26.435914 10956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1228 07:20:26.443191 10956 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:29 /usr/share/ca-certificates/minikubeCA.pem
I1228 07:20:26.447188 10956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1228 07:20:26.494187 10956 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1228 07:20:26.512188 10956 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
I1228 07:20:26.531196 10956 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13556.pem
I1228 07:20:26.548200 10956 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13556.pem /etc/ssl/certs/13556.pem
I1228 07:20:26.566191 10956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13556.pem
I1228 07:20:26.573196 10956 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:37 /usr/share/ca-certificates/13556.pem
I1228 07:20:26.578180 10956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13556.pem
I1228 07:20:26.633725 10956 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I1228 07:20:26.651406 10956 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/13556.pem /etc/ssl/certs/51391683.0
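The openssl/ln sequence above builds an OpenSSL-style trust store: each CA certificate is symlinked under its subject hash with a .0 suffix, which is where the 3ec20f2e.0, b5213941.0 and 51391683.0 names come from. Done by hand for a single certificate (path reused from this run):

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"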
I1228 07:20:26.666405 10956 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1228 07:20:26.673401 10956 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1228 07:20:26.673401 10956 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-550200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-550200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1228 07:20:26.676399 10956 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1228 07:20:26.715823 10956 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1228 07:20:26.736007 10956 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1228 07:20:26.751952 10956 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1228 07:20:26.757429 10956 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1228 07:20:26.776073 10956 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1228 07:20:26.776073 10956 kubeadm.go:158] found existing configuration files:
I1228 07:20:26.780843 10956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1228 07:20:26.799092 10956 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1228 07:20:26.802999 10956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1228 07:20:26.827908 10956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1228 07:20:26.840488 10956 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1228 07:20:26.844486 10956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1228 07:20:26.859485 10956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1228 07:20:26.872483 10956 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1228 07:20:26.875473 10956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1228 07:20:26.891474 10956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1228 07:20:26.903476 10956 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1228 07:20:26.908483 10956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1228 07:20:26.926495 10956 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1228 07:20:27.080496 10956 kubeadm.go:319] [WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
I1228 07:20:27.180486 10956 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1228 07:20:27.323507 10956 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1228 07:24:29.087485 10956 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1228 07:24:29.087593 10956 kubeadm.go:319]
I1228 07:24:29.087827 10956 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
I1228 07:24:29.093448 10956 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
I1228 07:24:29.093586 10956 kubeadm.go:319] [preflight] Running pre-flight checks
I1228 07:24:29.093846 10956 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1228 07:24:29.094037 10956 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
I1228 07:24:29.094198 10956 kubeadm.go:319] CONFIG_NAMESPACES: enabled
I1228 07:24:29.094198 10956 kubeadm.go:319] CONFIG_NET_NS: enabled
I1228 07:24:29.094198 10956 kubeadm.go:319] CONFIG_PID_NS: enabled
I1228 07:24:29.094198 10956 kubeadm.go:319] CONFIG_IPC_NS: enabled
I1228 07:24:29.094198 10956 kubeadm.go:319] CONFIG_UTS_NS: enabled
I1228 07:24:29.094732 10956 kubeadm.go:319] CONFIG_CPUSETS: enabled
I1228 07:24:29.094804 10956 kubeadm.go:319] CONFIG_MEMCG: enabled
I1228 07:24:29.094804 10956 kubeadm.go:319] CONFIG_INET: enabled
I1228 07:24:29.094804 10956 kubeadm.go:319] CONFIG_EXT4_FS: enabled
I1228 07:24:29.094804 10956 kubeadm.go:319] CONFIG_PROC_FS: enabled
I1228 07:24:29.095343 10956 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
I1228 07:24:29.095450 10956 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
I1228 07:24:29.095691 10956 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
I1228 07:24:29.095894 10956 kubeadm.go:319] CONFIG_CGROUPS: enabled
I1228 07:24:29.096129 10956 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
I1228 07:24:29.096334 10956 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
I1228 07:24:29.096481 10956 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
I1228 07:24:29.096641 10956 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
I1228 07:24:29.096745 10956 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
I1228 07:24:29.096903 10956 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
I1228 07:24:29.096903 10956 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
I1228 07:24:29.096903 10956 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
I1228 07:24:29.096903 10956 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
I1228 07:24:29.097575 10956 kubeadm.go:319] CONFIG_SECCOMP: enabled
I1228 07:24:29.097666 10956 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
I1228 07:24:29.097666 10956 kubeadm.go:319] OS: Linux
I1228 07:24:29.097666 10956 kubeadm.go:319] CGROUPS_CPU: enabled
I1228 07:24:29.097666 10956 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1228 07:24:29.097666 10956 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1228 07:24:29.097666 10956 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1228 07:24:29.098188 10956 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1228 07:24:29.098245 10956 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1228 07:24:29.098308 10956 kubeadm.go:319] CGROUPS_PIDS: enabled
I1228 07:24:29.098439 10956 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1228 07:24:29.098499 10956 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1228 07:24:29.098587 10956 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1228 07:24:29.098743 10956 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1228 07:24:29.098788 10956 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1228 07:24:29.098788 10956 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1228 07:24:29.102647 10956 out.go:252] - Generating certificates and keys ...
I1228 07:24:29.102647 10956 kubeadm.go:319] [certs] Using existing ca certificate authority
I1228 07:24:29.102647 10956 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1228 07:24:29.103216 10956 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1228 07:24:29.103250 10956 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1228 07:24:29.103250 10956 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1228 07:24:29.103250 10956 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1228 07:24:29.103250 10956 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1228 07:24:29.103908 10956 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-550200 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
I1228 07:24:29.103908 10956 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1228 07:24:29.103908 10956 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-550200 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
I1228 07:24:29.104556 10956 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1228 07:24:29.104626 10956 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1228 07:24:29.104626 10956 kubeadm.go:319] [certs] Generating "sa" key and public key
I1228 07:24:29.104626 10956 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1228 07:24:29.104626 10956 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1228 07:24:29.104626 10956 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1228 07:24:29.105312 10956 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1228 07:24:29.105560 10956 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1228 07:24:29.105717 10956 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1228 07:24:29.105953 10956 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1228 07:24:29.106179 10956 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1228 07:24:29.111391 10956 out.go:252] - Booting up control plane ...
I1228 07:24:29.111610 10956 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1228 07:24:29.111680 10956 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1228 07:24:29.111680 10956 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1228 07:24:29.111680 10956 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1228 07:24:29.112472 10956 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1228 07:24:29.112769 10956 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1228 07:24:29.113035 10956 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1228 07:24:29.113180 10956 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1228 07:24:29.113223 10956 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1228 07:24:29.113764 10956 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1228 07:24:29.113975 10956 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001181899s
I1228 07:24:29.113975 10956 kubeadm.go:319]
I1228 07:24:29.113975 10956 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1228 07:24:29.113975 10956 kubeadm.go:319] - The kubelet is not running
I1228 07:24:29.113975 10956 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1228 07:24:29.113975 10956 kubeadm.go:319]
I1228 07:24:29.114500 10956 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1228 07:24:29.114577 10956 kubeadm.go:319] - 'systemctl status kubelet'
I1228 07:24:29.114622 10956 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1228 07:24:29.114622 10956 kubeadm.go:319]
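At this point kubeadm has polled the kubelet's local health endpoint for the full 4m0s without an answer, so the kubelet either never started or crashed on startup. The two commands it recommends are the right first stops; a slightly fuller triage pass, run inside the node (for example via minikube ssh), might look like:

    systemctl status kubelet --no-pager
    journalctl -xeu kubelet --no-pager | tail -n 50
    # the endpoint kubeadm polls during wait-control-plane
    curl -sSL http://127.0.0.1:10248/healthz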
W1228 07:24:29.114622 10956 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
CONFIG_NAMESPACES: enabled
CONFIG_NET_NS: enabled
CONFIG_PID_NS: enabled
CONFIG_IPC_NS: enabled
CONFIG_UTS_NS: enabled
CONFIG_CPUSETS: enabled
CONFIG_MEMCG: enabled
CONFIG_INET: enabled
CONFIG_EXT4_FS: enabled
CONFIG_PROC_FS: enabled
CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
CONFIG_FAIR_GROUP_SCHED: enabled
CONFIG_CGROUPS: enabled
CONFIG_CGROUP_CPUACCT: enabled
CONFIG_CGROUP_DEVICE: enabled
CONFIG_CGROUP_FREEZER: enabled
CONFIG_CGROUP_PIDS: enabled
CONFIG_CGROUP_SCHED: enabled
CONFIG_OVERLAY_FS: enabled
CONFIG_AUFS_FS: not set - Required for aufs.
CONFIG_BLK_DEV_DM: enabled
CONFIG_CFS_BANDWIDTH: enabled
CONFIG_SECCOMP: enabled
CONFIG_SECCOMP_FILTER: enabled
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-550200 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-550200 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001181899s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
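The kubeadm hint above is generic; with the docker driver those commands have to run inside the node container, not on the Windows host. A minimal sketch, assuming the node container is named after the profile (as minikube's kic driver does) and that a quoted command can be passed to `minikube ssh`:

    # Run the kubeadm-suggested checks inside the force-systemd-flag-550200 node:
    minikube ssh -p force-systemd-flag-550200 "sudo systemctl status kubelet"
    minikube ssh -p force-systemd-flag-550200 "sudo journalctl -xeu kubelet | tail -n 50"
    # Equivalent without minikube, going through Docker directly:
    docker exec force-systemd-flag-550200 systemctl status kubelet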
I1228 07:24:29.119113 10956 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
I1228 07:24:29.583949 10956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1228 07:24:29.602639 10956 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1228 07:24:29.608264 10956 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1228 07:24:29.620750 10956 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1228 07:24:29.620814 10956 kubeadm.go:158] found existing configuration files:
I1228 07:24:29.625366 10956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1228 07:24:29.640824 10956 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1228 07:24:29.647918 10956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1228 07:24:29.671692 10956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1228 07:24:29.685903 10956 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1228 07:24:29.690163 10956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1228 07:24:29.708235 10956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1228 07:24:29.721408 10956 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1228 07:24:29.725407 10956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1228 07:24:29.744750 10956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1228 07:24:29.759627 10956 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1228 07:24:29.765603 10956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1228 07:24:29.781606 10956 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1228 07:24:29.907425 10956 kubeadm.go:319] [WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
I1228 07:24:29.999388 10956 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1228 07:24:30.121722 10956 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1228 07:28:30.829097 10956 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I1228 07:28:30.829193 10956 kubeadm.go:319]
I1228 07:28:30.829521 10956 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
I1228 07:28:30.834292 10956 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
I1228 07:28:30.834969 10956 kubeadm.go:319] [preflight] Running pre-flight checks
I1228 07:28:30.835305 10956 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1228 07:28:30.835470 10956 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
I1228 07:28:30.835661 10956 kubeadm.go:319] CONFIG_NAMESPACES: enabled
I1228 07:28:30.835885 10956 kubeadm.go:319] CONFIG_NET_NS: enabled
I1228 07:28:30.836054 10956 kubeadm.go:319] CONFIG_PID_NS: enabled
I1228 07:28:30.836105 10956 kubeadm.go:319] CONFIG_IPC_NS: enabled
I1228 07:28:30.836105 10956 kubeadm.go:319] CONFIG_UTS_NS: enabled
I1228 07:28:30.836105 10956 kubeadm.go:319] CONFIG_CPUSETS: enabled
I1228 07:28:30.836105 10956 kubeadm.go:319] CONFIG_MEMCG: enabled
I1228 07:28:30.836105 10956 kubeadm.go:319] CONFIG_INET: enabled
I1228 07:28:30.836845 10956 kubeadm.go:319] CONFIG_EXT4_FS: enabled
I1228 07:28:30.836935 10956 kubeadm.go:319] CONFIG_PROC_FS: enabled
I1228 07:28:30.837045 10956 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
I1228 07:28:30.837188 10956 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
I1228 07:28:30.837348 10956 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
I1228 07:28:30.837367 10956 kubeadm.go:319] CONFIG_CGROUPS: enabled
I1228 07:28:30.837367 10956 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
I1228 07:28:30.837367 10956 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
I1228 07:28:30.837367 10956 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
I1228 07:28:30.837367 10956 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
I1228 07:28:30.838073 10956 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
I1228 07:28:30.838144 10956 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
I1228 07:28:30.838144 10956 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
I1228 07:28:30.838144 10956 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
I1228 07:28:30.838144 10956 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
I1228 07:28:30.838754 10956 kubeadm.go:319] CONFIG_SECCOMP: enabled
I1228 07:28:30.838917 10956 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
I1228 07:28:30.839077 10956 kubeadm.go:319] OS: Linux
I1228 07:28:30.839105 10956 kubeadm.go:319] CGROUPS_CPU: enabled
I1228 07:28:30.839105 10956 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1228 07:28:30.839105 10956 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1228 07:28:30.839643 10956 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1228 07:28:30.839812 10956 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1228 07:28:30.840092 10956 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1228 07:28:30.840238 10956 kubeadm.go:319] CGROUPS_PIDS: enabled
I1228 07:28:30.840388 10956 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1228 07:28:30.840415 10956 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1228 07:28:30.840415 10956 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1228 07:28:30.840415 10956 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1228 07:28:30.841442 10956 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1228 07:28:30.841442 10956 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1228 07:28:30.845025 10956 out.go:252] - Generating certificates and keys ...
I1228 07:28:30.845350 10956 kubeadm.go:319] [certs] Using existing ca certificate authority
I1228 07:28:30.845413 10956 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1228 07:28:30.845413 10956 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1228 07:28:30.845413 10956 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
I1228 07:28:30.845413 10956 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
I1228 07:28:30.845413 10956 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
I1228 07:28:30.846025 10956 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
I1228 07:28:30.846065 10956 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
I1228 07:28:30.846065 10956 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1228 07:28:30.846065 10956 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1228 07:28:30.846065 10956 kubeadm.go:319] [certs] Using the existing "sa" key
I1228 07:28:30.846707 10956 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1228 07:28:30.846707 10956 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1228 07:28:30.846707 10956 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1228 07:28:30.846707 10956 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1228 07:28:30.847386 10956 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1228 07:28:30.847386 10956 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1228 07:28:30.847386 10956 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1228 07:28:30.847386 10956 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1228 07:28:30.862925 10956 out.go:252] - Booting up control plane ...
I1228 07:28:30.863263 10956 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1228 07:28:30.863453 10956 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1228 07:28:30.863599 10956 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1228 07:28:30.863920 10956 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1228 07:28:30.864159 10956 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1228 07:28:30.864402 10956 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1228 07:28:30.864711 10956 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1228 07:28:30.864711 10956 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1228 07:28:30.864711 10956 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1228 07:28:30.865367 10956 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1228 07:28:30.865547 10956 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001175587s
I1228 07:28:30.865547 10956 kubeadm.go:319]
I1228 07:28:30.865705 10956 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1228 07:28:30.865843 10956 kubeadm.go:319] - The kubelet is not running
I1228 07:28:30.866152 10956 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1228 07:28:30.866189 10956 kubeadm.go:319]
I1228 07:28:30.866348 10956 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1228 07:28:30.866348 10956 kubeadm.go:319] - 'systemctl status kubelet'
I1228 07:28:30.866348 10956 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1228 07:28:30.866348 10956 kubeadm.go:319]
I1228 07:28:30.866348 10956 kubeadm.go:403] duration metric: took 8m4.1856378s to StartCluster
I1228 07:28:30.871002 10956 ssh_runner.go:195] Run: sudo runc list -f json
E1228 07:28:30.892372 10956 logs.go:279] Failed to list containers for "kube-apiserver": runc: sudo runc list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:28:30Z" level=error msg="open /run/runc: no such file or directory"
I1228 07:28:30.896535 10956 ssh_runner.go:195] Run: sudo runc list -f json
E1228 07:28:30.913072 10956 logs.go:279] Failed to list containers for "etcd": runc: sudo runc list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:28:30Z" level=error msg="open /run/runc: no such file or directory"
I1228 07:28:30.918213 10956 ssh_runner.go:195] Run: sudo runc list -f json
E1228 07:28:30.943813 10956 logs.go:279] Failed to list containers for "coredns": runc: sudo runc list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:28:30Z" level=error msg="open /run/runc: no such file or directory"
I1228 07:28:30.950809 10956 ssh_runner.go:195] Run: sudo runc list -f json
E1228 07:28:30.974259 10956 logs.go:279] Failed to list containers for "kube-scheduler": runc: sudo runc list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:28:30Z" level=error msg="open /run/runc: no such file or directory"
I1228 07:28:30.980208 10956 ssh_runner.go:195] Run: sudo runc list -f json
E1228 07:28:31.009939 10956 logs.go:279] Failed to list containers for "kube-proxy": runc: sudo runc list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:28:31Z" level=error msg="open /run/runc: no such file or directory"
I1228 07:28:31.015235 10956 ssh_runner.go:195] Run: sudo runc list -f json
E1228 07:28:31.040595 10956 logs.go:279] Failed to list containers for "kube-controller-manager": runc: sudo runc list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:28:31Z" level=error msg="open /run/runc: no such file or directory"
I1228 07:28:31.046595 10956 ssh_runner.go:195] Run: sudo runc list -f json
E1228 07:28:31.070125 10956 logs.go:279] Failed to list containers for "kindnet": runc: sudo runc list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:28:31Z" level=error msg="open /run/runc: no such file or directory"
I1228 07:28:31.070125 10956 logs.go:123] Gathering logs for kubelet ...
I1228 07:28:31.070125 10956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1228 07:28:31.144442 10956 logs.go:123] Gathering logs for dmesg ...
I1228 07:28:31.145427 10956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1228 07:28:31.190901 10956 logs.go:123] Gathering logs for describe nodes ...
I1228 07:28:31.190901 10956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1228 07:28:31.281419 10956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1228 07:28:31.270801 10336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:28:31.272185 10336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:28:31.272956 10336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:28:31.276447 10336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:28:31.277504 10336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1228 07:28:31.281419 10956 logs.go:123] Gathering logs for Docker ...
I1228 07:28:31.281419 10956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I1228 07:28:31.314672 10956 logs.go:123] Gathering logs for container status ...
I1228 07:28:31.314672 10956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W1228 07:28:31.381927 10956 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
CONFIG_NAMESPACES: enabled
CONFIG_NET_NS: enabled
CONFIG_PID_NS: enabled
CONFIG_IPC_NS: enabled
CONFIG_UTS_NS: enabled
CONFIG_CPUSETS: enabled
CONFIG_MEMCG: enabled
CONFIG_INET: enabled
CONFIG_EXT4_FS: enabled
CONFIG_PROC_FS: enabled
CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
CONFIG_FAIR_GROUP_SCHED: enabled
CONFIG_CGROUPS: enabled
CONFIG_CGROUP_CPUACCT: enabled
CONFIG_CGROUP_DEVICE: enabled
CONFIG_CGROUP_FREEZER: enabled
CONFIG_CGROUP_PIDS: enabled
CONFIG_CGROUP_SCHED: enabled
CONFIG_OVERLAY_FS: enabled
CONFIG_AUFS_FS: not set - Required for aufs.
CONFIG_BLK_DEV_DM: enabled
CONFIG_CFS_BANDWIDTH: enabled
CONFIG_SECCOMP: enabled
CONFIG_SECCOMP_FILTER: enabled
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001175587s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
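Both repeated warnings trace back to the node running on a cgroup v1 host (the 5.15.153.1-microsoft-standard-WSL2 kernel shown above). One possible workaround, assuming Docker Desktop on WSL2, is to boot the WSL kernel with cgroup v2 only and restart it; the file location and kernel parameter below are standard WSL2 settings, not taken from this log:

    # In %UserProfile%\.wslconfig on the Windows host (assumed content):
    #   [wsl2]
    #   kernelCommandLine = cgroup_no_v1=all
    # Then stop all WSL distros so Docker Desktop comes back up on cgroup v2:
    wsl --shutdown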
W1228 07:28:31.381958 10956 out.go:285] *
W1228 07:28:31.381958 10956 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
CONFIG_NAMESPACES: enabled
CONFIG_NET_NS: enabled
CONFIG_PID_NS: enabled
CONFIG_IPC_NS: enabled
CONFIG_UTS_NS: enabled
CONFIG_CPUSETS: enabled
CONFIG_MEMCG: enabled
CONFIG_INET: enabled
CONFIG_EXT4_FS: enabled
CONFIG_PROC_FS: enabled
CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
CONFIG_FAIR_GROUP_SCHED: enabled
CONFIG_CGROUPS: enabled
CONFIG_CGROUP_CPUACCT: enabled
CONFIG_CGROUP_DEVICE: enabled
CONFIG_CGROUP_FREEZER: enabled
CONFIG_CGROUP_PIDS: enabled
CONFIG_CGROUP_SCHED: enabled
CONFIG_OVERLAY_FS: enabled
CONFIG_AUFS_FS: not set - Required for aufs.
CONFIG_BLK_DEV_DM: enabled
CONFIG_CFS_BANDWIDTH: enabled
CONFIG_SECCOMP: enabled
CONFIG_SECCOMP_FILTER: enabled
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001175587s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
W1228 07:28:31.381958 10956 out.go:285] *
W1228 07:28:31.381958 10956 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
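The suggestion in the box can be followed verbatim with the binary and profile from this run; a short sketch using the documented `minikube logs` flags:

    out/minikube-windows-amd64.exe logs -p force-systemd-flag-550200 --file=logs.txt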
I1228 07:28:31.388026 10956 out.go:203]
W1228 07:28:31.391609 10956 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
CONFIG_NAMESPACES: enabled
CONFIG_NET_NS: enabled
CONFIG_PID_NS: enabled
CONFIG_IPC_NS: enabled
CONFIG_UTS_NS: enabled
CONFIG_CPUSETS: enabled
CONFIG_MEMCG: enabled
CONFIG_INET: enabled
CONFIG_EXT4_FS: enabled
CONFIG_PROC_FS: enabled
CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
CONFIG_FAIR_GROUP_SCHED: enabled
CONFIG_CGROUPS: enabled
CONFIG_CGROUP_CPUACCT: enabled
CONFIG_CGROUP_DEVICE: enabled
CONFIG_CGROUP_FREEZER: enabled
CONFIG_CGROUP_PIDS: enabled
CONFIG_CGROUP_SCHED: enabled
CONFIG_OVERLAY_FS: enabled
CONFIG_AUFS_FS: not set - Required for aufs.
CONFIG_BLK_DEV_DM: enabled
CONFIG_CFS_BANDWIDTH: enabled
CONFIG_SECCOMP: enabled
CONFIG_SECCOMP_FILTER: enabled
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001175587s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
CONFIG_NAMESPACES: enabled
CONFIG_NET_NS: enabled
CONFIG_PID_NS: enabled
CONFIG_IPC_NS: enabled
CONFIG_UTS_NS: enabled
CONFIG_CPUSETS: enabled
CONFIG_MEMCG: enabled
CONFIG_INET: enabled
CONFIG_EXT4_FS: enabled
CONFIG_PROC_FS: enabled
CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
CONFIG_FAIR_GROUP_SCHED: enabled
CONFIG_CGROUPS: enabled
CONFIG_CGROUP_CPUACCT: enabled
CONFIG_CGROUP_DEVICE: enabled
CONFIG_CGROUP_FREEZER: enabled
CONFIG_CGROUP_PIDS: enabled
CONFIG_CGROUP_SCHED: enabled
CONFIG_OVERLAY_FS: enabled
CONFIG_AUFS_FS: not set - Required for aufs.
CONFIG_BLK_DEV_DM: enabled
CONFIG_CFS_BANDWIDTH: enabled
CONFIG_SECCOMP: enabled
CONFIG_SECCOMP_FILTER: enabled
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001175587s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
W1228 07:28:31.391609 10956 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W1228 07:28:31.391609 10956 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
* Related issue: https://github.com/kubernetes/minikube/issues/4172
I1228 07:28:31.394315 10956 out.go:203]
** /stderr **
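The same kubelet health probe kubeadm quotes above can be re-run by hand from the host (a sketch using this run's profile name; `minikube ssh --` forwards the command into the node, and the systemctl/journalctl checks are the ones kubeadm itself suggests):
  out/minikube-windows-amd64.exe -p force-systemd-flag-550200 ssh -- curl -sSL http://127.0.0.1:10248/healthz
  out/minikube-windows-amd64.exe -p force-systemd-flag-550200 ssh -- sudo systemctl status kubelet
  out/minikube-windows-amd64.exe -p force-systemd-flag-550200 ssh -- sudo journalctl -xeu kubelet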
docker_test.go:93: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p force-systemd-flag-550200 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker" : exit status 109
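Minikube's own suggestion in the stderr above translates to a retry along these lines (the extra-config value is copied verbatim from the suggestion; not verified to fix this particular failure):
  out/minikube-windows-amd64.exe start -p force-systemd-flag-550200 --memory=3072 --force-systemd --driver=docker --extra-config=kubelet.cgroup-driver=systemd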
docker_test.go:110: (dbg) Run: out/minikube-windows-amd64.exe -p force-systemd-flag-550200 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2025-12-28 07:28:32.4921253 +0000 UTC m=+3630.002556201
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======> post-mortem[TestForceSystemdFlag]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======> post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:240: (dbg) Run: docker inspect force-systemd-flag-550200
helpers_test.go:244: (dbg) docker inspect force-systemd-flag-550200:
-- stdout --
[
{
"Id": "14a46fe5d933ea1efad735c6d73abe971d2a24cb985fb4251c469b7df8ed3b6f",
"Created": "2025-12-28T07:20:03.584345984Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 181825,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-12-28T07:20:05.584805917Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:8b8cccb9afb2a57c3d011fcf33e0403b1551aa7036e30b12a395646869801935",
"ResolvConfPath": "/var/lib/docker/containers/14a46fe5d933ea1efad735c6d73abe971d2a24cb985fb4251c469b7df8ed3b6f/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/14a46fe5d933ea1efad735c6d73abe971d2a24cb985fb4251c469b7df8ed3b6f/hostname",
"HostsPath": "/var/lib/docker/containers/14a46fe5d933ea1efad735c6d73abe971d2a24cb985fb4251c469b7df8ed3b6f/hosts",
"LogPath": "/var/lib/docker/containers/14a46fe5d933ea1efad735c6d73abe971d2a24cb985fb4251c469b7df8ed3b6f/14a46fe5d933ea1efad735c6d73abe971d2a24cb985fb4251c469b7df8ed3b6f-json.log",
"Name": "/force-systemd-flag-550200",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"force-systemd-flag-550200:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "force-systemd-flag-550200",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 3221225472,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 3221225472,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/7d460ac1e4172c0c01df46c83e1759ddbc23cdf15ffe05923b58c670d122017c-init/diff:/var/lib/docker/overlay2/755790e5dd4d70e5001883ef2a2cf79adb7d5054e85cb9aeffa64c965a5cf81c/diff",
"MergedDir": "/var/lib/docker/overlay2/7d460ac1e4172c0c01df46c83e1759ddbc23cdf15ffe05923b58c670d122017c/merged",
"UpperDir": "/var/lib/docker/overlay2/7d460ac1e4172c0c01df46c83e1759ddbc23cdf15ffe05923b58c670d122017c/diff",
"WorkDir": "/var/lib/docker/overlay2/7d460ac1e4172c0c01df46c83e1759ddbc23cdf15ffe05923b58c670d122017c/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "force-systemd-flag-550200",
"Source": "/var/lib/docker/volumes/force-systemd-flag-550200/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "force-systemd-flag-550200",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "force-systemd-flag-550200",
"name.minikube.sigs.k8s.io": "force-systemd-flag-550200",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "7b01aa59cf3dbb7fff2defbbdb819e3432807283d3712a450b4622997252042e",
"SandboxKey": "/var/run/docker/netns/7b01aa59cf3d",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "54898"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "54899"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "54900"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "54901"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "54902"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"force-systemd-flag-550200": {
"IPAMConfig": {
"IPv4Address": "192.168.94.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "02:42:c0:a8:5e:02",
"DriverOpts": null,
"NetworkID": "072878f0256b28fadd181fa98b6ffd57a25d8bc213f05f4c604fe7261bee4292",
"EndpointID": "415f7dede622756688d9ccc5c6418bf3081b28ab1cf92831c96cd01d6c45c653",
"Gateway": "192.168.94.1",
"IPAddress": "192.168.94.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"force-systemd-flag-550200",
"14a46fe5d933"
]
}
}
}
}
]
-- /stdout --
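Individual fields from the inspect dump above can be pulled with Go templates instead of scanning the full JSON (a sketch; both invocations are standard `docker inspect --format` usage, quoting shown for a POSIX shell):
  docker inspect -f '{{.State.Status}}' force-systemd-flag-550200               # running
  docker inspect -f '{{json .NetworkSettings.Ports}}' force-systemd-flag-550200  # host port mappings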
helpers_test.go:248: (dbg) Run: out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-flag-550200 -n force-systemd-flag-550200
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-flag-550200 -n force-systemd-flag-550200: exit status 6 (626.4862ms)
-- stdout --
Running
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1228 07:28:33.147791 9504 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-550200" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
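The exit-status-6 case above is the stale-kubeconfig condition flagged in the stdout warning; the fix the warning points to would be (a sketch, per the warning text, not applied in this run):
  out/minikube-windows-amd64.exe -p force-systemd-flag-550200 update-context
  kubectl config current-context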
helpers_test.go:253: <<< TestForceSystemdFlag FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======> post-mortem[TestForceSystemdFlag]: minikube logs <======
helpers_test.go:256: (dbg) Run: out/minikube-windows-amd64.exe -p force-systemd-flag-550200 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-550200 logs -n 25: (1.1171517s)
helpers_test.go:261: TestForceSystemdFlag logs:
-- stdout --
==> Audit <==
┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
│ ssh │ -p cilium-410600 sudo cat /usr/lib/systemd/system/cri-docker.service │ cilium-410600 │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:24 UTC │ │
│ ssh │ -p cilium-410600 sudo cri-dockerd --version │ cilium-410600 │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:24 UTC │ │
│ ssh │ -p cilium-410600 sudo systemctl status containerd --all --full --no-pager │ cilium-410600 │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:24 UTC │ │
│ ssh │ -p cilium-410600 sudo systemctl cat containerd --no-pager │ cilium-410600 │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:24 UTC │ │
│ ssh │ -p cilium-410600 sudo cat /lib/systemd/system/containerd.service │ cilium-410600 │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:24 UTC │ │
│ ssh │ -p cilium-410600 sudo cat /etc/containerd/config.toml │ cilium-410600 │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:24 UTC │ │
│ ssh │ -p cilium-410600 sudo containerd config dump │ cilium-410600 │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:24 UTC │ │
│ ssh │ -p cilium-410600 sudo systemctl status crio --all --full --no-pager │ cilium-410600 │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:24 UTC │ │
│ ssh │ -p cilium-410600 sudo systemctl cat crio --no-pager │ cilium-410600 │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:24 UTC │ │
│ ssh │ -p cilium-410600 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \; │ cilium-410600 │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:24 UTC │ │
│ ssh │ -p cilium-410600 sudo crio config │ cilium-410600 │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:24 UTC │ │
│ delete │ -p cilium-410600 │ cilium-410600 │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:24 UTC │ 28 Dec 25 07:24 UTC │
│ start │ -p force-systemd-env-970200 --memory=3072 --alsologtostderr -v=5 --driver=docker │ force-systemd-env-970200 │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:24 UTC │ │
│ delete │ -p stopped-upgrade-550200 │ stopped-upgrade-550200 │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:25 UTC │ 28 Dec 25 07:25 UTC │
│ start │ -p missing-upgrade-224300 --memory=3072 --driver=docker │ missing-upgrade-224300 │ minikube4\jenkins │ v1.35.0 │ 28 Dec 25 07:25 UTC │ 28 Dec 25 07:26 UTC │
│ start │ -p cert-expiration-709700 --memory=3072 --cert-expiration=8760h --driver=docker │ cert-expiration-709700 │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:25 UTC │ 28 Dec 25 07:26 UTC │
│ start │ -p missing-upgrade-224300 --memory=3072 --alsologtostderr -v=1 --driver=docker │ missing-upgrade-224300 │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:26 UTC │ 28 Dec 25 07:27 UTC │
│ delete │ -p cert-expiration-709700 │ cert-expiration-709700 │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:26 UTC │ 28 Dec 25 07:26 UTC │
│ start │ -p test-preload-362600 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker │ test-preload-362600 │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:26 UTC │ 28 Dec 25 07:28 UTC │
│ delete │ -p missing-upgrade-224300 │ missing-upgrade-224300 │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:27 UTC │ 28 Dec 25 07:27 UTC │
│ start │ -p running-upgrade-509300 --memory=3072 --vm-driver=docker │ running-upgrade-509300 │ minikube4\jenkins │ v1.35.0 │ 28 Dec 25 07:27 UTC │ │
│ image │ test-preload-362600 image pull ghcr.io/medyagh/image-mirrors/busybox:latest │ test-preload-362600 │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:28 UTC │ 28 Dec 25 07:28 UTC │
│ stop │ -p test-preload-362600 │ test-preload-362600 │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:28 UTC │ 28 Dec 25 07:28 UTC │
│ start │ -p test-preload-362600 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker │ test-preload-362600 │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:28 UTC │ │
│ ssh │ force-systemd-flag-550200 ssh docker info --format {{.CgroupDriver}} │ force-systemd-flag-550200 │ minikube4\jenkins │ v1.37.0 │ 28 Dec 25 07:28 UTC │ 28 Dec 25 07:28 UTC │
└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/28 07:28:28
Running on machine: minikube4
Binary: Built with gc go1.25.5 for windows/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1228 07:28:28.871886 8696 out.go:360] Setting OutFile to fd 1520 ...
I1228 07:28:28.921737 8696 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 07:28:28.921737 8696 out.go:374] Setting ErrFile to fd 1784...
I1228 07:28:28.921737 8696 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 07:28:28.935146 8696 out.go:368] Setting JSON to false
I1228 07:28:28.938154 8696 start.go:133] hostinfo: {"hostname":"minikube4","uptime":6848,"bootTime":1766900060,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"22H2","kernelVersion":"10.0.19045.6691 Build 19045.6691","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
W1228 07:28:28.938154 8696 start.go:141] gopshost.Virtualization returned error: not implemented yet
I1228 07:28:28.942149 8696 out.go:179] * [test-preload-362600] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 22H2
I1228 07:28:28.946142 8696 notify.go:221] Checking for updates...
I1228 07:28:28.947149 8696 out.go:179] - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
I1228 07:28:28.950144 8696 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1228 07:28:28.953146 8696 out.go:179] - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
I1228 07:28:28.956143 8696 out.go:179] - MINIKUBE_LOCATION=22352
I1228 07:28:28.960148 8696 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1228 07:28:28.962148 8696 config.go:182] Loaded profile config "test-preload-362600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1228 07:28:28.963144 8696 driver.go:422] Setting default libvirt URI to qemu:///system
I1228 07:28:29.073154 8696 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
I1228 07:28:29.076149 8696 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1228 07:28:29.306079 8696 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:97 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-28 07:28:29.288331579 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
I1228 07:28:29.309076 8696 out.go:179] * Using the docker driver based on existing profile
I1228 07:28:29.313077 8696 start.go:309] selected driver: docker
I1228 07:28:29.313077 8696 start.go:928] validating driver "docker" against &{Name:test-preload-362600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:test-preload-362600 Namespace:default APIServerHAVIP: APIServ
erName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations
:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1228 07:28:29.314080 8696 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1228 07:28:29.320085 8696 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1228 07:28:29.566908 8696 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:97 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-28 07:28:29.547706565 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
I1228 07:28:29.566908 8696 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1228 07:28:29.566908 8696 cni.go:84] Creating CNI manager for ""
I1228 07:28:29.566908 8696 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1228 07:28:29.567906 8696 start.go:353] cluster config:
{Name:test-preload-362600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:test-preload-362600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Sock
etVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1228 07:28:29.570905 8696 out.go:179] * Starting "test-preload-362600" primary control-plane node in "test-preload-362600" cluster
I1228 07:28:29.575905 8696 cache.go:134] Beginning downloading kic base image for docker with docker
I1228 07:28:29.577904 8696 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
I1228 07:28:29.581904 8696 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I1228 07:28:29.581904 8696 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
I1228 07:28:29.581904 8696 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4
I1228 07:28:29.581904 8696 cache.go:65] Caching tarball of preloaded images
I1228 07:28:29.581904 8696 preload.go:251] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1228 07:28:29.582898 8696 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
I1228 07:28:29.582898 8696 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\test-preload-362600\config.json ...
I1228 07:28:29.654901 8696 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
I1228 07:28:29.654901 8696 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
I1228 07:28:29.654901 8696 cache.go:243] Successfully downloaded all kic artifacts
I1228 07:28:29.654901 8696 start.go:360] acquireMachinesLock for test-preload-362600: {Name:mk0079c10dfd22d58b9f49240ef09a361a7938ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1228 07:28:29.654901 8696 start.go:364] duration metric: took 0s to acquireMachinesLock for "test-preload-362600"
I1228 07:28:29.654901 8696 start.go:96] Skipping create...Using existing machine configuration
I1228 07:28:29.654901 8696 fix.go:54] fixHost starting:
I1228 07:28:29.663902 8696 cli_runner.go:164] Run: docker container inspect test-preload-362600 --format={{.State.Status}}
I1228 07:28:29.724898 8696 fix.go:112] recreateIfNeeded on test-preload-362600: state=Stopped err=<nil>
W1228 07:28:29.724898 8696 fix.go:138] unexpected machine state, will restart: <nil>
I1228 07:28:30.829097 10956 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I1228 07:28:30.829193 10956 kubeadm.go:319]
I1228 07:28:30.829521 10956 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
I1228 07:28:30.834292 10956 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
I1228 07:28:30.834969 10956 kubeadm.go:319] [preflight] Running pre-flight checks
I1228 07:28:30.835305 10956 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1228 07:28:30.835470 10956 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
I1228 07:28:30.835661 10956 kubeadm.go:319] CONFIG_NAMESPACES: enabled
I1228 07:28:30.835885 10956 kubeadm.go:319] CONFIG_NET_NS: enabled
I1228 07:28:30.836054 10956 kubeadm.go:319] CONFIG_PID_NS: enabled
I1228 07:28:30.836105 10956 kubeadm.go:319] CONFIG_IPC_NS: enabled
I1228 07:28:30.836105 10956 kubeadm.go:319] CONFIG_UTS_NS: enabled
I1228 07:28:30.836105 10956 kubeadm.go:319] CONFIG_CPUSETS: enabled
I1228 07:28:30.836105 10956 kubeadm.go:319] CONFIG_MEMCG: enabled
I1228 07:28:30.836105 10956 kubeadm.go:319] CONFIG_INET: enabled
I1228 07:28:30.836845 10956 kubeadm.go:319] CONFIG_EXT4_FS: enabled
I1228 07:28:30.836935 10956 kubeadm.go:319] CONFIG_PROC_FS: enabled
I1228 07:28:30.837045 10956 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
I1228 07:28:30.837188 10956 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
I1228 07:28:30.837348 10956 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
I1228 07:28:30.837367 10956 kubeadm.go:319] CONFIG_CGROUPS: enabled
I1228 07:28:30.837367 10956 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
I1228 07:28:30.837367 10956 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
I1228 07:28:30.837367 10956 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
I1228 07:28:30.837367 10956 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
I1228 07:28:30.838073 10956 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
I1228 07:28:30.838144 10956 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
I1228 07:28:30.838144 10956 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
I1228 07:28:30.838144 10956 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
I1228 07:28:30.838144 10956 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
I1228 07:28:30.838754 10956 kubeadm.go:319] CONFIG_SECCOMP: enabled
I1228 07:28:30.838917 10956 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
I1228 07:28:30.839077 10956 kubeadm.go:319] OS: Linux
I1228 07:28:30.839105 10956 kubeadm.go:319] CGROUPS_CPU: enabled
I1228 07:28:30.839105 10956 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1228 07:28:30.839105 10956 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1228 07:28:30.839643 10956 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1228 07:28:30.839812 10956 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1228 07:28:30.840092 10956 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1228 07:28:30.840238 10956 kubeadm.go:319] CGROUPS_PIDS: enabled
I1228 07:28:30.840388 10956 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1228 07:28:30.840415 10956 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1228 07:28:30.840415 10956 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1228 07:28:30.840415 10956 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1228 07:28:30.841442 10956 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1228 07:28:30.841442 10956 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1228 07:28:30.845025 10956 out.go:252] - Generating certificates and keys ...
I1228 07:28:30.845350 10956 kubeadm.go:319] [certs] Using existing ca certificate authority
I1228 07:28:30.845413 10956 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1228 07:28:30.845413 10956 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1228 07:28:30.845413 10956 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
I1228 07:28:30.845413 10956 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
I1228 07:28:30.845413 10956 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
I1228 07:28:30.846025 10956 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
I1228 07:28:30.846065 10956 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
I1228 07:28:30.846065 10956 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1228 07:28:30.846065 10956 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1228 07:28:30.846065 10956 kubeadm.go:319] [certs] Using the existing "sa" key
I1228 07:28:30.846707 10956 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1228 07:28:30.846707 10956 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1228 07:28:30.846707 10956 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1228 07:28:30.846707 10956 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1228 07:28:30.847386 10956 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1228 07:28:30.847386 10956 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1228 07:28:30.847386 10956 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1228 07:28:30.847386 10956 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1228 07:28:30.862925 10956 out.go:252] - Booting up control plane ...
I1228 07:28:30.863263 10956 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1228 07:28:30.863453 10956 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1228 07:28:30.863599 10956 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1228 07:28:30.863920 10956 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1228 07:28:30.864159 10956 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1228 07:28:30.864402 10956 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1228 07:28:30.864711 10956 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1228 07:28:30.864711 10956 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1228 07:28:30.864711 10956 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1228 07:28:30.865367 10956 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1228 07:28:30.865547 10956 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001175587s
I1228 07:28:30.865547 10956 kubeadm.go:319]
I1228 07:28:30.865705 10956 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1228 07:28:30.865843 10956 kubeadm.go:319] - The kubelet is not running
I1228 07:28:30.866152 10956 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1228 07:28:30.866189 10956 kubeadm.go:319]
I1228 07:28:30.866348 10956 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1228 07:28:30.866348 10956 kubeadm.go:319] - 'systemctl status kubelet'
I1228 07:28:30.866348 10956 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1228 07:28:30.866348 10956 kubeadm.go:319]
I1228 07:28:30.866348 10956 kubeadm.go:403] duration metric: took 8m4.1856378s to StartCluster
I1228 07:28:30.871002 10956 ssh_runner.go:195] Run: sudo runc list -f json
E1228 07:28:30.892372 10956 logs.go:279] Failed to list containers for "kube-apiserver": runc: sudo runc list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:28:30Z" level=error msg="open /run/runc: no such file or directory"
I1228 07:28:30.896535 10956 ssh_runner.go:195] Run: sudo runc list -f json
E1228 07:28:30.913072 10956 logs.go:279] Failed to list containers for "etcd": runc: sudo runc list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:28:30Z" level=error msg="open /run/runc: no such file or directory"
I1228 07:28:30.918213 10956 ssh_runner.go:195] Run: sudo runc list -f json
E1228 07:28:30.943813 10956 logs.go:279] Failed to list containers for "coredns": runc: sudo runc list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:28:30Z" level=error msg="open /run/runc: no such file or directory"
I1228 07:28:30.950809 10956 ssh_runner.go:195] Run: sudo runc list -f json
E1228 07:28:30.974259 10956 logs.go:279] Failed to list containers for "kube-scheduler": runc: sudo runc list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:28:30Z" level=error msg="open /run/runc: no such file or directory"
I1228 07:28:30.980208 10956 ssh_runner.go:195] Run: sudo runc list -f json
E1228 07:28:31.009939 10956 logs.go:279] Failed to list containers for "kube-proxy": runc: sudo runc list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:28:31Z" level=error msg="open /run/runc: no such file or directory"
I1228 07:28:31.015235 10956 ssh_runner.go:195] Run: sudo runc list -f json
E1228 07:28:31.040595 10956 logs.go:279] Failed to list containers for "kube-controller-manager": runc: sudo runc list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:28:31Z" level=error msg="open /run/runc: no such file or directory"
I1228 07:28:31.046595 10956 ssh_runner.go:195] Run: sudo runc list -f json
E1228 07:28:31.070125 10956 logs.go:279] Failed to list containers for "kindnet": runc: sudo runc list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:28:31Z" level=error msg="open /run/runc: no such file or directory"
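/run/runc is runc's default state directory, so every probe in this series fails identically: no container was ever created, hence the directory was never written. The fallback minikube's log gatherer runs next is also the usual manual check (commands as run inside the node):
  sudo runc list -f json                   # fails: open /run/runc: no such file or directory
  sudo crictl ps -a || sudo docker ps -a   # fallback used below to list container status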
I1228 07:28:31.070125 10956 logs.go:123] Gathering logs for kubelet ...
I1228 07:28:31.070125 10956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1228 07:28:31.144442 10956 logs.go:123] Gathering logs for dmesg ...
I1228 07:28:31.145427 10956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1228 07:28:31.190901 10956 logs.go:123] Gathering logs for describe nodes ...
I1228 07:28:31.190901 10956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1228 07:28:31.281419 10956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1228 07:28:31.270801 10336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:28:31.272185 10336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:28:31.272956 10336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:28:31.276447 10336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:28:31.277504 10336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
E1228 07:28:31.270801 10336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:28:31.272185 10336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:28:31.272956 10336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:28:31.276447 10336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:28:31.277504 10336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I1228 07:28:31.281419 10956 logs.go:123] Gathering logs for Docker ...
I1228 07:28:31.281419 10956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I1228 07:28:31.314672 10956 logs.go:123] Gathering logs for container status ...
I1228 07:28:31.314672 10956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W1228 07:28:31.381927 10956 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
CONFIG_NAMESPACES: enabled
CONFIG_NET_NS: enabled
CONFIG_PID_NS: enabled
CONFIG_IPC_NS: enabled
CONFIG_UTS_NS: enabled
CONFIG_CPUSETS: enabled
CONFIG_MEMCG: enabled
CONFIG_INET: enabled
CONFIG_EXT4_FS: enabled
CONFIG_PROC_FS: enabled
CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
CONFIG_FAIR_GROUP_SCHED: enabled
CONFIG_CGROUPS: enabled
CONFIG_CGROUP_CPUACCT: enabled
CONFIG_CGROUP_DEVICE: enabled
CONFIG_CGROUP_FREEZER: enabled
CONFIG_CGROUP_PIDS: enabled
CONFIG_CGROUP_SCHED: enabled
CONFIG_OVERLAY_FS: enabled
CONFIG_AUFS_FS: not set - Required for aufs.
CONFIG_BLK_DEV_DM: enabled
CONFIG_CFS_BANDWIDTH: enabled
CONFIG_SECCOMP: enabled
CONFIG_SECCOMP_FILTER: enabled
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001175587s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
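Context on the failure above: per the SystemVerification warning, kubelet v1.35 refuses to run on a cgroup v1 host unless the kubelet configuration option 'FailCgroupV1' is set to 'false', and this WSL2 kernel is on cgroup v1 (the kubelet journal below confirms it), so the 4m0s health check can never succeed. A quick way to confirm which cgroup version the node container sees, as a sketch assuming the docker driver and the node name from this run:

    docker exec force-systemd-flag-550200 stat -fc %T /sys/fs/cgroup

This prints cgroup2fs on a cgroup v2 host and tmpfs on cgroup v1.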
W1228 07:28:31.381958 10956 out.go:285] *
W1228 07:28:31.381958 10956 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[duplicate kubeadm init output omitted; identical to the system-verification and kubelet-check block above, ending at "To see the stack trace of this error execute with --v=5 or higher"]
W1228 07:28:31.381958 10956 out.go:285] *
W1228 07:28:31.381958 10956 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1228 07:28:31.388026 10956 out.go:203]
W1228 07:28:31.391609 10956 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[duplicate kubeadm init output omitted; identical to the system-verification and kubelet-check block above, ending at "To see the stack trace of this error execute with --v=5 or higher"]
W1228 07:28:31.391609 10956 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W1228 07:28:31.391609 10956 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I1228 07:28:31.394315 10956 out.go:203]
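A note on the suggestion above: the --extra-config=kubelet.cgroup-driver=systemd hint targets a cgroup-driver mismatch, but the kubelet journal later in this log fails validation because the host is on cgroup v1 at all ("kubelet is configured to not run on a host using cgroup v1"). One commonly cited workaround for Docker Desktop on WSL2, offered here as an assumption about this host rather than something verified in this run, is to boot the WSL2 kernel with cgroup v2 only by adding the following to %UserProfile%\.wslconfig on the Windows host and then restarting WSL:

    [wsl2]
    kernelCommandLine = cgroup_no_v1=all

    wsl --shutdown

After Docker Desktop comes back up, stat -fc %T /sys/fs/cgroup inside the node should report cgroup2fs.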
I1228 07:28:31.944635 10488 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
I1228 07:28:31.944635 10488 kubeadm.go:310] [preflight] Running pre-flight checks
I1228 07:28:31.944635 10488 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I1228 07:28:31.944635 10488 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1228 07:28:31.945626 10488 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1228 07:28:31.945626 10488 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1228 07:28:31.947629 10488 out.go:235] - Generating certificates and keys ...
I1228 07:28:31.947629 10488 kubeadm.go:310] [certs] Using existing ca certificate authority
I1228 07:28:31.947629 10488 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I1228 07:28:31.948621 10488 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I1228 07:28:31.948621 10488 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I1228 07:28:31.948621 10488 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I1228 07:28:31.948621 10488 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I1228 07:28:31.948621 10488 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I1228 07:28:31.948621 10488 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost running-upgrade-509300] and IPs [192.168.103.2 127.0.0.1 ::1]
I1228 07:28:31.948621 10488 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I1228 07:28:31.949625 10488 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost running-upgrade-509300] and IPs [192.168.103.2 127.0.0.1 ::1]
I1228 07:28:31.949625 10488 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I1228 07:28:31.949625 10488 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I1228 07:28:31.949625 10488 kubeadm.go:310] [certs] Generating "sa" key and public key
I1228 07:28:31.949625 10488 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1228 07:28:31.949625 10488 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I1228 07:28:31.949625 10488 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1228 07:28:31.949625 10488 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1228 07:28:31.950630 10488 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1228 07:28:31.950630 10488 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1228 07:28:31.950630 10488 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1228 07:28:31.950630 10488 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1228 07:28:31.952631 10488 out.go:235] - Booting up control plane ...
I1228 07:28:31.952631 10488 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1228 07:28:31.953636 10488 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1228 07:28:31.953636 10488 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1228 07:28:31.953636 10488 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1228 07:28:31.953636 10488 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1228 07:28:31.953636 10488 kubeadm.go:310] [kubelet-start] Starting the kubelet
I1228 07:28:31.953636 10488 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1228 07:28:31.954653 10488 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1228 07:28:31.954653 10488 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002997436s
I1228 07:28:31.954653 10488 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I1228 07:28:31.954653 10488 kubeadm.go:310] [api-check] The API server is healthy after 7.002683961s
I1228 07:28:31.954653 10488 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1228 07:28:31.955631 10488 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1228 07:28:31.955631 10488 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I1228 07:28:31.955631 10488 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-509300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1228 07:28:31.955631 10488 kubeadm.go:310] [bootstrap-token] Using token: nn6gwz.03ll6pyso0maxojd
I1228 07:28:31.959618 10488 out.go:235] - Configuring RBAC rules ...
I1228 07:28:31.959618 10488 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1228 07:28:31.959618 10488 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1228 07:28:31.959618 10488 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1228 07:28:31.960625 10488 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1228 07:28:31.960625 10488 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1228 07:28:31.960625 10488 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1228 07:28:31.960625 10488 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1228 07:28:31.960625 10488 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I1228 07:28:31.961635 10488 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I1228 07:28:31.961635 10488 kubeadm.go:310]
I1228 07:28:31.961635 10488 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I1228 07:28:31.961635 10488 kubeadm.go:310]
I1228 07:28:31.961635 10488 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I1228 07:28:31.961635 10488 kubeadm.go:310]
I1228 07:28:31.961635 10488 kubeadm.go:310] mkdir -p $HOME/.kube
I1228 07:28:31.961635 10488 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1228 07:28:31.961635 10488 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1228 07:28:31.961635 10488 kubeadm.go:310]
I1228 07:28:31.961635 10488 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I1228 07:28:31.961635 10488 kubeadm.go:310]
I1228 07:28:31.962640 10488 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I1228 07:28:31.962640 10488 kubeadm.go:310]
I1228 07:28:31.962640 10488 kubeadm.go:310] You should now deploy a pod network to the cluster.
I1228 07:28:31.962640 10488 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1228 07:28:31.962640 10488 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1228 07:28:31.962640 10488 kubeadm.go:310]
I1228 07:28:31.962640 10488 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I1228 07:28:31.962640 10488 kubeadm.go:310] and service account keys on each node and then running the following as root:
I1228 07:28:31.962640 10488 kubeadm.go:310]
I1228 07:28:31.963645 10488 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token nn6gwz.03ll6pyso0maxojd \
I1228 07:28:31.963645 10488 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:3fea1b033220c76616a69daefe9de210d60574273e9df21e09282f95b8582ae4 \
I1228 07:28:31.963645 10488 kubeadm.go:310] --control-plane
I1228 07:28:31.963645 10488 kubeadm.go:310]
I1228 07:28:31.963645 10488 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I1228 07:28:31.963645 10488 kubeadm.go:310]
I1228 07:28:31.963645 10488 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token nn6gwz.03ll6pyso0maxojd \
I1228 07:28:31.963645 10488 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:3fea1b033220c76616a69daefe9de210d60574273e9df21e09282f95b8582ae4
I1228 07:28:31.964648 10488 cni.go:84] Creating CNI manager for ""
I1228 07:28:31.964648 10488 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1228 07:28:31.967635 10488 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I1228 07:28:31.976634 10488 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I1228 07:28:32.033654 10488 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
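The 496-byte /etc/cni/net.d/1-k8s.conflist copied above is not reproduced in this log. For orientation, a bridge conflist of the kind minikube generates looks roughly like the sketch below; the field values are illustrative, not the exact file from this run:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }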
I1228 07:28:32.131098 10488 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1228 07:28:32.143378 10488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1228 07:28:32.145892 10488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-509300 minikube.k8s.io/updated_at=2025_12_28T07_28_32_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=dd5d320e41b5451cdf3c01891bc4e13d189586ed-dirty minikube.k8s.io/name=running-upgrade-509300 minikube.k8s.io/primary=true
I1228 07:28:32.150406 10488 ops.go:34] apiserver oom_adj: -16
I1228 07:28:32.338382 10488 kubeadm.go:1113] duration metric: took 207.0472ms to wait for elevateKubeSystemPrivileges
I1228 07:28:32.373410 10488 kubeadm.go:394] duration metric: took 12.9308811s to StartCluster
I1228 07:28:32.373544 10488 settings.go:142] acquiring lock: {Name:mkac923b109dc030b95783d9963c0a5b20048f30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1228 07:28:32.373594 10488 settings.go:150] Updating kubeconfig: C:\Users\jenkins.minikube4\AppData\Local\Temp\legacy_kubeconfig572837400
I1228 07:28:32.376045 10488 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\AppData\Local\Temp\legacy_kubeconfig572837400: {Name:mk13db8e9f6987c9bdb728c51e105117f28b0fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1228 07:28:32.378156 10488 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1228 07:28:32.378156 10488 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I1228 07:28:32.378276 10488 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I1228 07:28:32.378352 10488 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-509300"
I1228 07:28:32.378352 10488 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-509300"
I1228 07:28:32.378352 10488 addons.go:238] Setting addon storage-provisioner=true in "running-upgrade-509300"
I1228 07:28:32.378352 10488 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-509300"
I1228 07:28:32.378352 10488 host.go:66] Checking if "running-upgrade-509300" exists ...
I1228 07:28:32.378352 10488 config.go:182] Loaded profile config "running-upgrade-509300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.0
I1228 07:28:32.383435 10488 out.go:177] * Verifying Kubernetes components...
I1228 07:28:32.394845 10488 cli_runner.go:164] Run: docker container inspect running-upgrade-509300 --format={{.State.Status}}
I1228 07:28:32.394845 10488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1228 07:28:32.395355 10488 cli_runner.go:164] Run: docker container inspect running-upgrade-509300 --format={{.State.Status}}
I1228 07:28:32.463878 10488 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
==> Docker <==
Dec 28 07:20:23 force-systemd-flag-550200 dockerd[1187]: time="2025-12-28T07:20:23.497567320Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
Dec 28 07:20:23 force-systemd-flag-550200 dockerd[1187]: time="2025-12-28T07:20:23.497608424Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
Dec 28 07:20:23 force-systemd-flag-550200 dockerd[1187]: time="2025-12-28T07:20:23.497618625Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
Dec 28 07:20:23 force-systemd-flag-550200 dockerd[1187]: time="2025-12-28T07:20:23.497624226Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
Dec 28 07:20:23 force-systemd-flag-550200 dockerd[1187]: time="2025-12-28T07:20:23.497629826Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
Dec 28 07:20:23 force-systemd-flag-550200 dockerd[1187]: time="2025-12-28T07:20:23.497653729Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
Dec 28 07:20:23 force-systemd-flag-550200 dockerd[1187]: time="2025-12-28T07:20:23.497688732Z" level=info msg="Initializing buildkit"
Dec 28 07:20:23 force-systemd-flag-550200 dockerd[1187]: time="2025-12-28T07:20:23.620259180Z" level=info msg="Completed buildkit initialization"
Dec 28 07:20:23 force-systemd-flag-550200 dockerd[1187]: time="2025-12-28T07:20:23.636274093Z" level=info msg="Daemon has completed initialization"
Dec 28 07:20:23 force-systemd-flag-550200 dockerd[1187]: time="2025-12-28T07:20:23.636499816Z" level=info msg="API listen on /run/docker.sock"
Dec 28 07:20:23 force-systemd-flag-550200 dockerd[1187]: time="2025-12-28T07:20:23.636559822Z" level=info msg="API listen on /var/run/docker.sock"
Dec 28 07:20:23 force-systemd-flag-550200 dockerd[1187]: time="2025-12-28T07:20:23.636628329Z" level=info msg="API listen on [::]:2376"
Dec 28 07:20:23 force-systemd-flag-550200 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 28 07:20:24 force-systemd-flag-550200 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
Dec 28 07:20:24 force-systemd-flag-550200 cri-dockerd[1478]: time="2025-12-28T07:20:24Z" level=info msg="Starting cri-dockerd dev (HEAD)"
Dec 28 07:20:24 force-systemd-flag-550200 cri-dockerd[1478]: time="2025-12-28T07:20:24Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
Dec 28 07:20:24 force-systemd-flag-550200 cri-dockerd[1478]: time="2025-12-28T07:20:24Z" level=info msg="Start docker client with request timeout 0s"
Dec 28 07:20:24 force-systemd-flag-550200 cri-dockerd[1478]: time="2025-12-28T07:20:24Z" level=info msg="Hairpin mode is set to hairpin-veth"
Dec 28 07:20:24 force-systemd-flag-550200 cri-dockerd[1478]: time="2025-12-28T07:20:24Z" level=info msg="Loaded network plugin cni"
Dec 28 07:20:24 force-systemd-flag-550200 cri-dockerd[1478]: time="2025-12-28T07:20:24Z" level=info msg="Docker cri networking managed by network plugin cni"
Dec 28 07:20:24 force-systemd-flag-550200 cri-dockerd[1478]: time="2025-12-28T07:20:24Z" level=info msg="Setting cgroupDriver systemd"
Dec 28 07:20:24 force-systemd-flag-550200 cri-dockerd[1478]: time="2025-12-28T07:20:24Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
Dec 28 07:20:24 force-systemd-flag-550200 cri-dockerd[1478]: time="2025-12-28T07:20:24Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
Dec 28 07:20:24 force-systemd-flag-550200 cri-dockerd[1478]: time="2025-12-28T07:20:24Z" level=info msg="Start cri-dockerd grpc backend"
Dec 28 07:20:24 force-systemd-flag-550200 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
==> describe nodes <==
command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1228 07:28:34.180540 10560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:28:34.181519 10560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:28:34.182578 10560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:28:34.183915 10560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:28:34.184878 10560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
==> dmesg <==
[ +0.926377] CPU: 4 PID: 252615 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
[ +0.000003] RIP: 0033:0x7f37543beb20
[ +0.000007] Code: Unable to access opcode bytes at RIP 0x7f37543beaf6.
[ +0.000001] RSP: 002b:00007ffed12f1230 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
[ +0.000002] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
[ +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
[ +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
[ +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
[ +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
[ +0.000001] FS: 0000000000000000 GS: 0000000000000000
[Dec28 07:27] tmpfs: Unknown parameter 'noswap'
[ +7.486264] tmpfs: Unknown parameter 'noswap'
[ +0.657209] tmpfs: Unknown parameter 'noswap'
[Dec28 07:28] tmpfs: Unknown parameter 'noswap'
[ +7.727880] CPU: 12 PID: 266434 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
[ +0.000004] RIP: 0033:0x7fc4c0c23b20
[ +0.000008] Code: Unable to access opcode bytes at RIP 0x7fc4c0c23af6.
[ +0.000001] RSP: 002b:00007ffd4d85e4f0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
[ +0.000004] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
[ +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
[ +0.000002] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
[ +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
[ +0.000005] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
[ +0.000002] FS: 0000000000000000 GS: 0000000000000000
[ +0.675643] tmpfs: Unknown parameter 'noswap'
==> kernel <==
07:28:34 up 1:53, 0 user, load average: 3.98, 3.46, 2.77
Linux force-systemd-flag-550200 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
==> kubelet <==
Dec 28 07:28:31 force-systemd-flag-550200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 28 07:28:31 force-systemd-flag-550200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
Dec 28 07:28:31 force-systemd-flag-550200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 28 07:28:31 force-systemd-flag-550200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 28 07:28:31 force-systemd-flag-550200 kubelet[10370]: E1228 07:28:31.794031 10370 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 28 07:28:31 force-systemd-flag-550200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 28 07:28:31 force-systemd-flag-550200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 28 07:28:32 force-systemd-flag-550200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
Dec 28 07:28:32 force-systemd-flag-550200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 28 07:28:32 force-systemd-flag-550200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 28 07:28:32 force-systemd-flag-550200 kubelet[10415]: E1228 07:28:32.555390 10415 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 28 07:28:32 force-systemd-flag-550200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 28 07:28:32 force-systemd-flag-550200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 28 07:28:33 force-systemd-flag-550200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
Dec 28 07:28:33 force-systemd-flag-550200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 28 07:28:33 force-systemd-flag-550200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 28 07:28:33 force-systemd-flag-550200 kubelet[10441]: E1228 07:28:33.287179 10441 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 28 07:28:33 force-systemd-flag-550200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 28 07:28:33 force-systemd-flag-550200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 28 07:28:33 force-systemd-flag-550200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
Dec 28 07:28:33 force-systemd-flag-550200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 28 07:28:33 force-systemd-flag-550200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 28 07:28:34 force-systemd-flag-550200 kubelet[10520]: E1228 07:28:34.067339 10520 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 28 07:28:34 force-systemd-flag-550200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 28 07:28:34 force-systemd-flag-550200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
-- /stdout --
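The kubelet section above shows the service crash-looping (restart counter past 320) on the same validation error each time. Following the SystemVerification warning's own guidance, the opt-in would be a KubeletConfiguration that disables the cgroup v1 check; a minimal sketch, assuming the field is spelled failCgroupV1 in the v1beta1 schema as the warning's 'FailCgroupV1' suggests:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false

The warning also notes the SystemVerification preflight check must be skipped explicitly, which this run already does via --ignore-preflight-errors=...,SystemVerification.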
helpers_test.go:263: (dbg) Run: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p force-systemd-flag-550200 -n force-systemd-flag-550200
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p force-systemd-flag-550200 -n force-systemd-flag-550200: exit status 6 (703.7292ms)
-- stdout --
Stopped
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1228 07:28:35.089533 13432 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-550200" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-flag-550200" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-flag-550200" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-windows-amd64.exe delete -p force-systemd-flag-550200
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-550200: (2.8537129s)
--- FAIL: TestForceSystemdFlag (568.11s)