=== RUN TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT TestForceSystemdFlag
docker_test.go:91: (dbg) Run: out/minikube-windows-amd64.exe start -p force-systemd-flag-637800 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker
docker_test.go:91: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p force-systemd-flag-637800 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker: exit status 109 (9m14.0183265s)
-- stdout --
* [force-systemd-flag-637800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 22H2
- KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
- MINIKUBE_LOCATION=22332
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting "force-systemd-flag-637800" primary control-plane node in "force-systemd-flag-637800" cluster
* Pulling base image v0.0.48-1766570851-22316 ...
-- /stdout --
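The failing command above is driven from docker_test.go as a subprocess of the test binary. As a rough, hedged sketch of that pattern (this is not the actual minikube test helper; the binary path and profile name are copied from the log, everything else here is assumed for illustration):

package integration

import (
	"os/exec"
	"testing"
)

func TestForceSystemdFlagSketch(t *testing.T) {
	// Assumed binary path and profile name, taken verbatim from the log above.
	cmd := exec.Command("out/minikube-windows-amd64.exe",
		"start", "-p", "force-systemd-flag-637800",
		"--memory=3072", "--force-systemd",
		"--alsologtostderr", "-v=5", "--driver=docker")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// A non-zero exit (exit status 109 in the run above) surfaces as *exec.ExitError.
		t.Fatalf("minikube start failed: %v\noutput:\n%s", err, out)
	}
}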
** stderr **
I1227 20:44:45.920893 8368 out.go:360] Setting OutFile to fd 696 ...
I1227 20:44:45.982890 8368 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:44:45.982890 8368 out.go:374] Setting ErrFile to fd 1980...
I1227 20:44:45.982890 8368 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:44:45.996889 8368 out.go:368] Setting JSON to false
I1227 20:44:45.998895 8368 start.go:133] hostinfo: {"hostname":"minikube4","uptime":3672,"bootTime":1766864613,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"22H2","kernelVersion":"10.0.19045.6691 Build 19045.6691","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
W1227 20:44:45.998895 8368 start.go:141] gopshost.Virtualization returned error: not implemented yet
I1227 20:44:46.002886 8368 out.go:179] * [force-systemd-flag-637800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 22H2
I1227 20:44:46.005884 8368 notify.go:221] Checking for updates...
I1227 20:44:46.006891 8368 out.go:179] - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
I1227 20:44:46.008888 8368 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1227 20:44:46.011888 8368 out.go:179] - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
I1227 20:44:46.014886 8368 out.go:179] - MINIKUBE_LOCATION=22332
I1227 20:44:46.021885 8368 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1227 20:44:46.028892 8368 driver.go:422] Setting default libvirt URI to qemu:///system
I1227 20:44:46.173537 8368 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
I1227 20:44:46.176536 8368 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1227 20:44:46.625093 8368 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:true NGoroutines:82 SystemTime:2025-12-27 20:44:46.598321745 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
I1227 20:44:46.630087 8368 out.go:179] * Using the docker driver based on user configuration
I1227 20:44:46.635085 8368 start.go:309] selected driver: docker
I1227 20:44:46.635085 8368 start.go:928] validating driver "docker" against <nil>
I1227 20:44:46.635085 8368 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1227 20:44:46.643095 8368 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1227 20:44:47.082612 8368 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:80 SystemTime:2025-12-27 20:44:47.065017899 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
I1227 20:44:47.082612 8368 start_flags.go:333] no existing cluster config was found, will generate one from the flags
I1227 20:44:47.083612 8368 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
I1227 20:44:47.586928 8368 out.go:179] * Using Docker Desktop driver with root privileges
I1227 20:44:47.627883 8368 cni.go:84] Creating CNI manager for ""
I1227 20:44:47.627883 8368 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1227 20:44:47.627974 8368 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I1227 20:44:47.628211 8368 start.go:353] cluster config:
{Name:force-systemd-flag-637800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-637800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1227 20:44:47.647641 8368 out.go:179] * Starting "force-systemd-flag-637800" primary control-plane node in "force-systemd-flag-637800" cluster
I1227 20:44:47.689298 8368 cache.go:134] Beginning downloading kic base image for docker with docker
I1227 20:44:47.746850 8368 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
I1227 20:44:47.786355 8368 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
I1227 20:44:47.786682 8368 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I1227 20:44:47.786825 8368 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4
I1227 20:44:47.786825 8368 cache.go:65] Caching tarball of preloaded images
I1227 20:44:47.786825 8368 preload.go:251] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1227 20:44:47.786825 8368 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
I1227 20:44:47.787410 8368 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\config.json ...
I1227 20:44:47.787410 8368 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\config.json: {Name:mk5ebd4e14f5837357f270b7883f6c7cd5b53f6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 20:44:47.862081 8368 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
I1227 20:44:47.862081 8368 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
I1227 20:44:47.862081 8368 cache.go:243] Successfully downloaded all kic artifacts
I1227 20:44:47.862081 8368 start.go:360] acquireMachinesLock for force-systemd-flag-637800: {Name:mk4fea70227937b59b139a887f8b0cb3d2cd6442 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1227 20:44:47.863093 8368 start.go:364] duration metric: took 0s to acquireMachinesLock for "force-systemd-flag-637800"
I1227 20:44:47.863093 8368 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-637800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-637800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I1227 20:44:47.863093 8368 start.go:125] createHost starting for "" (driver="docker")
I1227 20:44:47.880072 8368 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
I1227 20:44:47.880072 8368 start.go:159] libmachine.API.Create for "force-systemd-flag-637800" (driver="docker")
I1227 20:44:47.880072 8368 client.go:173] LocalClient.Create starting
I1227 20:44:47.881099 8368 main.go:144] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
I1227 20:44:47.881099 8368 main.go:144] libmachine: Decoding PEM data...
I1227 20:44:47.881099 8368 main.go:144] libmachine: Parsing certificate...
I1227 20:44:47.881099 8368 main.go:144] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
I1227 20:44:47.881099 8368 main.go:144] libmachine: Decoding PEM data...
I1227 20:44:47.881099 8368 main.go:144] libmachine: Parsing certificate...
I1227 20:44:47.887066 8368 cli_runner.go:164] Run: docker network inspect force-systemd-flag-637800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1227 20:44:47.940066 8368 cli_runner.go:211] docker network inspect force-systemd-flag-637800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1227 20:44:47.944063 8368 network_create.go:284] running [docker network inspect force-systemd-flag-637800] to gather additional debugging logs...
I1227 20:44:47.944063 8368 cli_runner.go:164] Run: docker network inspect force-systemd-flag-637800
W1227 20:44:47.998085 8368 cli_runner.go:211] docker network inspect force-systemd-flag-637800 returned with exit code 1
I1227 20:44:47.998085 8368 network_create.go:287] error running [docker network inspect force-systemd-flag-637800]: docker network inspect force-systemd-flag-637800: exit status 1
stdout:
[]
stderr:
Error response from daemon: network force-systemd-flag-637800 not found
I1227 20:44:47.998085 8368 network_create.go:289] output of [docker network inspect force-systemd-flag-637800]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network force-systemd-flag-637800 not found
** /stderr **
I1227 20:44:48.003078 8368 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1227 20:44:48.072066 8368 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I1227 20:44:48.103085 8368 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I1227 20:44:48.135070 8368 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I1227 20:44:48.167070 8368 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I1227 20:44:48.185069 8368 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00173c930}
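The four "skipping subnet" lines above show the selection walk: candidate /24 blocks starting at 192.168.49.0/24, advancing the third octet until a non-reserved block (here 192.168.85.0/24) is found. A minimal standalone sketch of just that skip-reserved walk, assuming the fixed stride of 9 visible in the log (minikube's real pkg/network code also consults host routes and existing docker networks):

package main

import "fmt"

func main() {
	// Subnets the log reports as reserved before settling on 192.168.85.0/24.
	reserved := map[int]bool{49: true, 58: true, 67: true, 76: true}
	for octet := 49; octet <= 255; octet += 9 { // stride of 9 matches the walk above
		if reserved[octet] {
			fmt.Printf("skipping subnet 192.168.%d.0/24 that is reserved\n", octet)
			continue
		}
		fmt.Printf("using free private subnet 192.168.%d.0/24\n", octet)
		break
	}
}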
I1227 20:44:48.185069 8368 network_create.go:124] attempt to create docker network force-systemd-flag-637800 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
I1227 20:44:48.189082 8368 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-637800 force-systemd-flag-637800
I1227 20:44:48.404084 8368 network_create.go:108] docker network force-systemd-flag-637800 192.168.85.0/24 created
I1227 20:44:48.404084 8368 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-637800" container
I1227 20:44:48.412080 8368 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1227 20:44:48.465075 8368 cli_runner.go:164] Run: docker volume create force-systemd-flag-637800 --label name.minikube.sigs.k8s.io=force-systemd-flag-637800 --label created_by.minikube.sigs.k8s.io=true
I1227 20:44:48.520075 8368 oci.go:103] Successfully created a docker volume force-systemd-flag-637800
I1227 20:44:48.525073 8368 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-637800-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-637800 --entrypoint /usr/bin/test -v force-systemd-flag-637800:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
I1227 20:44:50.649852 8368 cli_runner.go:217] Completed: docker run --rm --name force-systemd-flag-637800-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-637800 --entrypoint /usr/bin/test -v force-systemd-flag-637800:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib: (2.124756s)
I1227 20:44:50.649852 8368 oci.go:107] Successfully prepared a docker volume force-systemd-flag-637800
I1227 20:44:50.649852 8368 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I1227 20:44:50.649852 8368 kic.go:194] Starting extracting preloaded images to volume ...
I1227 20:44:50.653860 8368 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-637800:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
I1227 20:45:37.448145 8368 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-637800:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (46.7927616s)
I1227 20:45:37.449148 8368 kic.go:203] duration metric: took 46.7987712s to extract preloaded images to volume ...
I1227 20:45:37.455146 8368 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1227 20:45:37.857805 8368 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:true NGoroutines:89 SystemTime:2025-12-27 20:45:37.837600918 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
I1227 20:45:37.861810 8368 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1227 20:45:38.272921 8368 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-637800 --name force-systemd-flag-637800 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-637800 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-637800 --network force-systemd-flag-637800 --ip 192.168.85.2 --volume force-systemd-flag-637800:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
I1227 20:45:39.981024 8368 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-637800 --name force-systemd-flag-637800 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-637800 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-637800 --network force-systemd-flag-637800 --ip 192.168.85.2 --volume force-systemd-flag-637800:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a: (1.7080833s)
I1227 20:45:39.987019 8368 cli_runner.go:164] Run: docker container inspect force-systemd-flag-637800 --format={{.State.Running}}
I1227 20:45:40.069325 8368 cli_runner.go:164] Run: docker container inspect force-systemd-flag-637800 --format={{.State.Status}}
I1227 20:45:40.146347 8368 cli_runner.go:164] Run: docker exec force-systemd-flag-637800 stat /var/lib/dpkg/alternatives/iptables
I1227 20:45:40.293379 8368 oci.go:144] the created container "force-systemd-flag-637800" has a running status.
I1227 20:45:40.293379 8368 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-637800\id_rsa...
I1227 20:45:40.323338 8368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-637800\id_rsa.pub -> /home/docker/.ssh/authorized_keys
I1227 20:45:40.336351 8368 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-637800\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1227 20:45:40.448317 8368 cli_runner.go:164] Run: docker container inspect force-systemd-flag-637800 --format={{.State.Status}}
I1227 20:45:40.528305 8368 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1227 20:45:40.528305 8368 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-637800 chown docker:docker /home/docker/.ssh/authorized_keys]
I1227 20:45:40.680318 8368 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-637800\id_rsa...
I1227 20:45:43.749971 8368 cli_runner.go:164] Run: docker container inspect force-systemd-flag-637800 --format={{.State.Status}}
I1227 20:45:43.816978 8368 machine.go:94] provisionDockerMachine start ...
I1227 20:45:43.822980 8368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-637800
I1227 20:45:43.896972 8368 main.go:144] libmachine: Using SSH client type: native
I1227 20:45:43.910971 8368 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6d94ee200] 0x7ff6d94f0d60 <nil> [] 0s} 127.0.0.1 59660 <nil> <nil>}
I1227 20:45:43.910971 8368 main.go:144] libmachine: About to run SSH command:
hostname
I1227 20:45:44.091696 8368 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-637800
I1227 20:45:44.091742 8368 ubuntu.go:182] provisioning hostname "force-systemd-flag-637800"
I1227 20:45:44.097956 8368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-637800
I1227 20:45:44.158858 8368 main.go:144] libmachine: Using SSH client type: native
I1227 20:45:44.158858 8368 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6d94ee200] 0x7ff6d94f0d60 <nil> [] 0s} 127.0.0.1 59660 <nil> <nil>}
I1227 20:45:44.158858 8368 main.go:144] libmachine: About to run SSH command:
sudo hostname force-systemd-flag-637800 && echo "force-systemd-flag-637800" | sudo tee /etc/hostname
I1227 20:45:44.366740 8368 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-637800
I1227 20:45:44.370730 8368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-637800
I1227 20:45:44.425744 8368 main.go:144] libmachine: Using SSH client type: native
I1227 20:45:44.426743 8368 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6d94ee200] 0x7ff6d94f0d60 <nil> [] 0s} 127.0.0.1 59660 <nil> <nil>}
I1227 20:45:44.426743 8368 main.go:144] libmachine: About to run SSH command:
if ! grep -xq '.*\sforce-systemd-flag-637800' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-637800/g' /etc/hosts;
else
echo '127.0.1.1 force-systemd-flag-637800' | sudo tee -a /etc/hosts;
fi
fi
I1227 20:45:44.601883 8368 main.go:144] libmachine: SSH cmd err, output: <nil>:
I1227 20:45:44.601883 8368 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
I1227 20:45:44.601883 8368 ubuntu.go:190] setting up certificates
I1227 20:45:44.601883 8368 provision.go:84] configureAuth start
I1227 20:45:44.606878 8368 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-637800
I1227 20:45:44.671293 8368 provision.go:143] copyHostCerts
I1227 20:45:44.671383 8368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem
I1227 20:45:44.671383 8368 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
I1227 20:45:44.671383 8368 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
I1227 20:45:44.671927 8368 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
I1227 20:45:44.672872 8368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem
I1227 20:45:44.672872 8368 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
I1227 20:45:44.672872 8368 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
I1227 20:45:44.672872 8368 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
I1227 20:45:44.673715 8368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem
I1227 20:45:44.673715 8368 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
I1227 20:45:44.673715 8368 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
I1227 20:45:44.674600 8368 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
I1227 20:45:44.675643 8368 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.force-systemd-flag-637800 san=[127.0.0.1 192.168.85.2 force-systemd-flag-637800 localhost minikube]
I1227 20:45:44.927042 8368 provision.go:177] copyRemoteCerts
I1227 20:45:44.932043 8368 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1227 20:45:44.936043 8368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-637800
I1227 20:45:44.989056 8368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59660 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-637800\id_rsa Username:docker}
I1227 20:45:45.122632 8368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
I1227 20:45:45.122632 8368 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1227 20:45:45.157375 8368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
I1227 20:45:45.157375 8368 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1204 bytes)
I1227 20:45:45.199605 8368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
I1227 20:45:45.199605 8368 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1227 20:45:45.227510 8368 provision.go:87] duration metric: took 625.6196ms to configureAuth
I1227 20:45:45.227510 8368 ubuntu.go:206] setting minikube options for container-runtime
I1227 20:45:45.228505 8368 config.go:182] Loaded profile config "force-systemd-flag-637800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 20:45:45.232509 8368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-637800
I1227 20:45:45.289516 8368 main.go:144] libmachine: Using SSH client type: native
I1227 20:45:45.289516 8368 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6d94ee200] 0x7ff6d94f0d60 <nil> [] 0s} 127.0.0.1 59660 <nil> <nil>}
I1227 20:45:45.289516 8368 main.go:144] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1227 20:45:45.465784 8368 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
I1227 20:45:45.465845 8368 ubuntu.go:71] root file system type: overlay
I1227 20:45:45.466000 8368 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I1227 20:45:45.469796 8368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-637800
I1227 20:45:45.523648 8368 main.go:144] libmachine: Using SSH client type: native
I1227 20:45:45.523648 8368 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6d94ee200] 0x7ff6d94f0d60 <nil> [] 0s} 127.0.0.1 59660 <nil> <nil>}
I1227 20:45:45.523648 8368 main.go:144] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
Wants=network-online.target containerd.service
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=always
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
-H fd:// --containerd=/run/containerd/containerd.sock \
-H unix:///var/run/docker.sock \
--default-ulimit=nofile=1048576:1048576 \
--tlsverify \
--tlscacert /etc/docker/ca.pem \
--tlscert /etc/docker/server.pem \
--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1227 20:45:45.694975 8368 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
Wants=network-online.target containerd.service
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=always
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H fd:// --containerd=/run/containerd/containerd.sock -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
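The provisioner writes the rendered unit to docker.service.new and, as the next SSH command below shows, swaps it into place and restarts docker only when it differs from the live unit. A hedged local sketch of that write-compare-swap pattern (paths and exec calls are illustrative; minikube actually performs these steps over SSH inside the node):

package main

import (
	"bytes"
	"os"
	"os/exec"
)

// updateDockerUnit mirrors the pattern above: install the rendered unit only
// if it differs from the live one, then reload, enable, and restart docker.
func updateDockerUnit(rendered []byte) error {
	const live = "/lib/systemd/system/docker.service"
	next := live + ".new"

	if err := os.WriteFile(next, rendered, 0o644); err != nil {
		return err
	}
	current, err := os.ReadFile(live)
	if err == nil && bytes.Equal(current, rendered) {
		return os.Remove(next) // already up to date; keep the live unit
	}
	if err := os.Rename(next, live); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if err := exec.Command(args[0], args[1:]...).Run(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	_ = updateDockerUnit([]byte("[Unit]\nDescription=Docker Application Container Engine\n"))
}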
I1227 20:45:45.698935 8368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-637800
I1227 20:45:45.757922 8368 main.go:144] libmachine: Using SSH client type: native
I1227 20:45:45.758282 8368 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6d94ee200] 0x7ff6d94f0d60 <nil> [] 0s} 127.0.0.1 59660 <nil> <nil>}
I1227 20:45:45.758282 8368 main.go:144] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1227 20:45:48.338984 8368 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2025-12-12 14:48:15.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2025-12-27 20:45:45.681571544 +0000
@@ -9,23 +9,34 @@
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
Restart=always
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H fd:// --containerd=/run/containerd/containerd.sock -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
+
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
+LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
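With the unit swapped in, dockerd now runs under the flags shown in the diff above. The point of --force-systemd is that the runtime inside the node should then report "systemd" rather than "cgroupfs" as its cgroup driver; a hedged sketch of that check (binary path and profile name are taken from the log, and the real test's assertion may differ in detail):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask the docker daemon inside the node which cgroup driver it ended up with.
	out, err := exec.Command("out/minikube-windows-amd64.exe",
		"-p", "force-systemd-flag-637800",
		"ssh", "docker info --format {{.CgroupDriver}}").Output()
	if err != nil {
		fmt.Println("minikube ssh failed:", err)
		return
	}
	if driver := strings.TrimSpace(string(out)); driver != "systemd" {
		fmt.Printf("expected systemd cgroup driver, got %q\n", driver)
	} else {
		fmt.Println("cgroup driver is systemd, as forced")
	}
}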
I1227 20:45:48.339074 8368 machine.go:97] duration metric: took 4.5220458s to provisionDockerMachine
I1227 20:45:48.339074 8368 client.go:176] duration metric: took 1m0.4583257s to LocalClient.Create
I1227 20:45:48.339171 8368 start.go:167] duration metric: took 1m0.4584219s to libmachine.API.Create "force-systemd-flag-637800"
I1227 20:45:48.339231 8368 start.go:293] postStartSetup for "force-systemd-flag-637800" (driver="docker")
I1227 20:45:48.339286 8368 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1227 20:45:48.347419 8368 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1227 20:45:48.354032 8368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-637800
I1227 20:45:48.407854 8368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59660 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-637800\id_rsa Username:docker}
I1227 20:45:48.524848 8368 ssh_runner.go:195] Run: cat /etc/os-release
I1227 20:45:48.531860 8368 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1227 20:45:48.531860 8368 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1227 20:45:48.531860 8368 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
I1227 20:45:48.531860 8368 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
I1227 20:45:48.532863 8368 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\136562.pem -> 136562.pem in /etc/ssl/certs
I1227 20:45:48.532863 8368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\136562.pem -> /etc/ssl/certs/136562.pem
I1227 20:45:48.539861 8368 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1227 20:45:48.551857 8368 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\136562.pem --> /etc/ssl/certs/136562.pem (1708 bytes)
I1227 20:45:48.585854 8368 start.go:296] duration metric: took 246.62ms for postStartSetup
I1227 20:45:48.591856 8368 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-637800
I1227 20:45:48.646856 8368 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\config.json ...
I1227 20:45:48.654852 8368 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1227 20:45:48.660853 8368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-637800
I1227 20:45:48.716856 8368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59660 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-637800\id_rsa Username:docker}
I1227 20:45:48.851956 8368 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1227 20:45:48.866124 8368 start.go:128] duration metric: took 1m1.0023474s to createHost
I1227 20:45:48.866124 8368 start.go:83] releasing machines lock for "force-systemd-flag-637800", held for 1m1.0023474s
I1227 20:45:48.869695 8368 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-637800
I1227 20:45:48.920690 8368 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
I1227 20:45:48.924704 8368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-637800
I1227 20:45:48.924704 8368 ssh_runner.go:195] Run: cat /version.json
I1227 20:45:48.927693 8368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-637800
I1227 20:45:48.983710 8368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59660 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-637800\id_rsa Username:docker}
I1227 20:45:48.984703 8368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59660 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-637800\id_rsa Username:docker}
W1227 20:45:49.091146 8368 start.go:879] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
stdout:
stderr:
bash: line 1: curl.exe: command not found
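The exit-127 here appears to be the Windows binary name leaking into a command executed inside the Linux node, where only `curl` (no `.exe` suffix) exists; bash confirms with "command not found". A hedged sketch of picking the binary name by OS, illustrative only and not minikube's actual code:

```go
package main

import (
	"fmt"
	"runtime"
)

// curlBinary picks the curl executable name for the shell that will run
// the command. The registry probe above failed because "curl.exe" was
// used inside the Linux container.
func curlBinary(goos string) string {
	if goos == "windows" {
		return "curl.exe"
	}
	return "curl"
}

func main() {
	fmt.Println(curlBinary(runtime.GOOS))
}
```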
I1227 20:45:49.097074 8368 ssh_runner.go:195] Run: systemctl --version
I1227 20:45:49.114049 8368 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1227 20:45:49.123063 8368 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1227 20:45:49.128052 8368 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1227 20:45:49.181056 8368 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1227 20:45:49.181056 8368 start.go:496] detecting cgroup driver to use...
I1227 20:45:49.181056 8368 start.go:500] using "systemd" cgroup driver as enforced via flags
I1227 20:45:49.181056 8368 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1227 20:45:49.209058 8368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
W1227 20:45:49.224065 8368 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
W1227 20:45:49.224065 8368 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
I1227 20:45:49.230057 8368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1227 20:45:49.244061 8368 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
I1227 20:45:49.248049 8368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
I1227 20:45:49.267051 8368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1227 20:45:49.288058 8368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1227 20:45:49.307052 8368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1227 20:45:49.324071 8368 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1227 20:45:49.343053 8368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1227 20:45:49.361051 8368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1227 20:45:49.381053 8368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1227 20:45:49.399067 8368 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1227 20:45:49.416074 8368 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1227 20:45:49.433051 8368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 20:45:49.596936 8368 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1227 20:45:49.788307 8368 start.go:496] detecting cgroup driver to use...
I1227 20:45:49.788307 8368 start.go:500] using "systemd" cgroup driver as enforced via flags
I1227 20:45:49.795301 8368 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I1227 20:45:49.824310 8368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1227 20:45:49.846295 8368 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1227 20:45:49.947797 8368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1227 20:45:49.974075 8368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1227 20:45:49.993081 8368 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I1227 20:45:50.019086 8368 ssh_runner.go:195] Run: which cri-dockerd
I1227 20:45:50.031092 8368 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I1227 20:45:50.044078 8368 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
I1227 20:45:50.070069 8368 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I1227 20:45:50.238421 8368 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I1227 20:45:50.404739 8368 docker.go:578] configuring docker to use "systemd" as cgroup driver...
I1227 20:45:50.404739 8368 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
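The 129-byte /etc/docker/daemon.json pushed here is what switches dockerd to the systemd cgroup driver. The log does not show its contents; this is a plausible reconstruction, marshalled the way a Go tool might write it (both keys are assumptions):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical daemon.json enforcing the systemd cgroup driver.
func main() {
	cfg := map[string]any{
		"exec-opts":  []string{"native.cgroupdriver=systemd"},
		"log-driver": "json-file",
	}
	b, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(b))
}
```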
I1227 20:45:50.527688 8368 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
I1227 20:45:50.556809 8368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 20:45:50.763676 8368 ssh_runner.go:195] Run: sudo systemctl restart docker
I1227 20:45:51.820290 8368 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.0566026s)
I1227 20:45:51.823616 8368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1227 20:45:51.846852 8368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I1227 20:45:51.869195 8368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I1227 20:45:51.892137 8368 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I1227 20:45:52.040646 8368 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I1227 20:45:52.180010 8368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 20:45:52.331942 8368 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I1227 20:45:52.357934 8368 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
I1227 20:45:52.379935 8368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 20:45:52.488834 8368 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I1227 20:45:52.606921 8368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I1227 20:45:52.626419 8368 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
I1227 20:45:52.630413 8368 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I1227 20:45:52.637418 8368 start.go:574] Will wait 60s for crictl version
I1227 20:45:52.640405 8368 ssh_runner.go:195] Run: which crictl
I1227 20:45:52.652423 8368 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1227 20:45:52.699028 8368 start.go:590] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 29.1.3
RuntimeApiVersion: v1
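"Will wait 60s for socket path /var/run/cri-dockerd.sock" above is a stat-until-deadline loop. A minimal equivalent, assuming the socket path from the log (the helper is a sketch, not minikube's implementation):

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a filesystem path until it appears or the
// deadline passes, mirroring the 60s socket wait logged above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
}
```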
I1227 20:45:52.702039 8368 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1227 20:45:52.755278 8368 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1227 20:45:52.813153 8368 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 29.1.3 ...
I1227 20:45:52.817748 8368 cli_runner.go:164] Run: docker exec -t force-systemd-flag-637800 dig +short host.docker.internal
I1227 20:45:52.962809 8368 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
I1227 20:45:52.966825 8368 ssh_runner.go:195] Run: grep 192.168.65.254 host.minikube.internal$ /etc/hosts
I1227 20:45:52.973818 8368 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1227 20:45:52.994806 8368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" force-systemd-flag-637800
I1227 20:45:53.047818 8368 kubeadm.go:884] updating cluster {Name:force-systemd-flag-637800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-637800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
I1227 20:45:53.047818 8368 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I1227 20:45:53.051814 8368 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1227 20:45:53.085490 8368 docker.go:694] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
registry.k8s.io/pause:3.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I1227 20:45:53.085536 8368 docker.go:624] Images already preloaded, skipping extraction
I1227 20:45:53.089980 8368 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1227 20:45:53.126005 8368 docker.go:694] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
registry.k8s.io/pause:3.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I1227 20:45:53.126005 8368 cache_images.go:86] Images are preloaded, skipping loading
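"Images are preloaded, skipping loading" falls out of comparing the runtime's image list (the `docker images --format` runs above) against the expected preload set. A toy version of that set check; the image names come straight from the log, the function is mine:

```go
package main

import "fmt"

// missing returns expected images absent from the runtime's list.
func missing(expected, got []string) []string {
	have := map[string]bool{}
	for _, img := range got {
		have[img] = true
	}
	var out []string
	for _, img := range expected {
		if !have[img] {
			out = append(out, img)
		}
	}
	return out
}

func main() {
	expected := []string{"registry.k8s.io/kube-apiserver:v1.35.0", "registry.k8s.io/pause:3.10.1"}
	got := []string{"registry.k8s.io/kube-apiserver:v1.35.0", "registry.k8s.io/pause:3.10.1"}
	fmt.Println("missing:", missing(expected, got)) // empty => skip extraction
}
```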
I1227 20:45:53.126005 8368 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 docker true true} ...
I1227 20:45:53.126005 8368 kubeadm.go:947] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-637800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
[Install]
config:
{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-637800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1227 20:45:53.129982 8368 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I1227 20:45:53.204979 8368 cni.go:84] Creating CNI manager for ""
I1227 20:45:53.204979 8368 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1227 20:45:53.204979 8368 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1227 20:45:53.204979 8368 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-637800 NodeName:force-systemd-flag-637800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1227 20:45:53.205979 8368 kubeadm.go:203] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.85.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "force-systemd-flag-637800"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.85.2"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.35.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I1227 20:45:53.209989 8368 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
I1227 20:45:53.222979 8368 binaries.go:51] Found k8s binaries, skipping transfer
I1227 20:45:53.226975 8368 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1227 20:45:53.240986 8368 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
I1227 20:45:53.261114 8368 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1227 20:45:53.285444 8368 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
I1227 20:45:53.312181 8368 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I1227 20:45:53.319745 8368 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1227 20:45:53.338745 8368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 20:45:53.490194 8368 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1227 20:45:53.512335 8368 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800 for IP: 192.168.85.2
I1227 20:45:53.512335 8368 certs.go:195] generating shared ca certs ...
I1227 20:45:53.512335 8368 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 20:45:53.513171 8368 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
I1227 20:45:53.513171 8368 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
I1227 20:45:53.513171 8368 certs.go:257] generating profile certs ...
I1227 20:45:53.514122 8368 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\client.key
I1227 20:45:53.514420 8368 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\client.crt with IP's: []
I1227 20:45:53.583995 8368 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\client.crt ...
I1227 20:45:53.583995 8368 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\client.crt: {Name:mk5ae8d8bc510c098f8c076201617d960d137d96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 20:45:53.585093 8368 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\client.key ...
I1227 20:45:53.586006 8368 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\client.key: {Name:mkb67d769000f92c5919e771c804ef4e1ae7c469 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 20:45:53.587013 8368 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\apiserver.key.81dfed14
I1227 20:45:53.587013 8368 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\apiserver.crt.81dfed14 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
I1227 20:45:53.713877 8368 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\apiserver.crt.81dfed14 ...
I1227 20:45:53.713877 8368 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\apiserver.crt.81dfed14: {Name:mkde3590f25df3075aba03614a6d757a7100a23d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 20:45:53.714882 8368 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\apiserver.key.81dfed14 ...
I1227 20:45:53.714882 8368 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\apiserver.key.81dfed14: {Name:mk70c9faccd859488cfdafb43a0e0791895e7add Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 20:45:53.715882 8368 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\apiserver.crt.81dfed14 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\apiserver.crt
I1227 20:45:53.732888 8368 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\apiserver.key.81dfed14 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\apiserver.key
I1227 20:45:53.733887 8368 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\proxy-client.key
I1227 20:45:53.733887 8368 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\proxy-client.crt with IP's: []
I1227 20:45:53.851820 8368 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\proxy-client.crt ...
I1227 20:45:53.851820 8368 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\proxy-client.crt: {Name:mk00719c9a9b789569dd3aa2fef5e42c4e1ded43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 20:45:53.852821 8368 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\proxy-client.key ...
I1227 20:45:53.852821 8368 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\proxy-client.key: {Name:mk246c37ea059f9862608a5865bbacf9edb46773 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 20:45:53.852821 8368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
I1227 20:45:53.852821 8368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
I1227 20:45:53.852821 8368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I1227 20:45:53.853846 8368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I1227 20:45:53.853846 8368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I1227 20:45:53.853846 8368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\apiserver.key -> /var/lib/minikube/certs/apiserver.key
I1227 20:45:53.853846 8368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I1227 20:45:53.864361 8368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I1227 20:45:53.864515 8368 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13656.pem (1338 bytes)
W1227 20:45:53.865174 8368 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13656_empty.pem, impossibly tiny 0 bytes
I1227 20:45:53.865174 8368 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
I1227 20:45:53.865799 8368 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
I1227 20:45:53.865941 8368 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
I1227 20:45:53.866198 8368 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
I1227 20:45:53.866424 8368 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\136562.pem (1708 bytes)
I1227 20:45:53.866954 8368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I1227 20:45:53.867001 8368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13656.pem -> /usr/share/ca-certificates/13656.pem
I1227 20:45:53.867132 8368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\136562.pem -> /usr/share/ca-certificates/136562.pem
I1227 20:45:53.867811 8368 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1227 20:45:53.900367 8368 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1227 20:45:53.929531 8368 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1227 20:45:53.960431 8368 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1227 20:45:53.986425 8368 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
I1227 20:45:54.019454 8368 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1227 20:45:54.047451 8368 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1227 20:45:54.077296 8368 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-637800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I1227 20:45:54.106294 8368 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1227 20:45:54.134287 8368 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\13656.pem --> /usr/share/ca-certificates/13656.pem (1338 bytes)
I1227 20:45:54.163289 8368 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\136562.pem --> /usr/share/ca-certificates/136562.pem (1708 bytes)
I1227 20:45:54.190284 8368 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
I1227 20:45:54.213298 8368 ssh_runner.go:195] Run: openssl version
I1227 20:45:54.227284 8368 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1227 20:45:54.243304 8368 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1227 20:45:54.259288 8368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1227 20:45:54.266290 8368 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:57 /usr/share/ca-certificates/minikubeCA.pem
I1227 20:45:54.271291 8368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1227 20:45:54.320866 8368 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1227 20:45:54.336869 8368 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
I1227 20:45:54.355325 8368 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13656.pem
I1227 20:45:54.371330 8368 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13656.pem /etc/ssl/certs/13656.pem
I1227 20:45:54.389308 8368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13656.pem
I1227 20:45:54.396309 8368 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:04 /usr/share/ca-certificates/13656.pem
I1227 20:45:54.402308 8368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13656.pem
I1227 20:45:54.451159 8368 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I1227 20:45:54.470157 8368 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/13656.pem /etc/ssl/certs/51391683.0
I1227 20:45:54.485154 8368 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/136562.pem
I1227 20:45:54.502156 8368 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/136562.pem /etc/ssl/certs/136562.pem
I1227 20:45:54.519175 8368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136562.pem
I1227 20:45:54.526158 8368 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:04 /usr/share/ca-certificates/136562.pem
I1227 20:45:54.530148 8368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136562.pem
I1227 20:45:54.593161 8368 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I1227 20:45:54.615923 8368 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/136562.pem /etc/ssl/certs/3ec20f2e.0
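Each of the openssl/ln pairs above makes a CA trusted by symlinking it under /etc/ssl/certs as <subject-hash>.0 (e.g. b5213941.0). A simplified sketch of the same flow; it shells out to openssl for the subject hash, links the pem directly rather than via the /etc/ssl/certs copy, and needs root, so treat it as illustrative only:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// trustCert computes the OpenSSL subject hash for a PEM and symlinks it
// into /etc/ssl/certs as <hash>.0, mirroring the ln -fs calls above.
func trustCert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	fmt.Println("linking", pem, "->", link)
	return os.Symlink(pem, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("trust failed:", err)
	}
}
```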
I1227 20:45:54.636440 8368 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1227 20:45:54.644429 8368 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1227 20:45:54.644429 8368 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-637800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-637800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1227 20:45:54.648423 8368 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1227 20:45:54.684444 8368 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1227 20:45:54.705431 8368 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1227 20:45:54.717422 8368 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1227 20:45:54.722430 8368 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1227 20:45:54.735425 8368 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1227 20:45:54.735425 8368 kubeadm.go:158] found existing configuration files:
I1227 20:45:54.739435 8368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1227 20:45:54.752526 8368 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1227 20:45:54.760571 8368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1227 20:45:54.778752 8368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1227 20:45:54.795328 8368 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1227 20:45:54.801434 8368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1227 20:45:54.822117 8368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1227 20:45:54.835111 8368 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1227 20:45:54.839110 8368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1227 20:45:54.855116 8368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1227 20:45:54.867110 8368 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1227 20:45:54.871110 8368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1227 20:45:54.888111 8368 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1227 20:45:55.005940 8368 kubeadm.go:319] [WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
I1227 20:45:55.102889 8368 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1227 20:45:55.230001 8368 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1227 20:49:57.186046 8368 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I1227 20:49:57.186185 8368 kubeadm.go:319]
I1227 20:49:57.186185 8368 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
I1227 20:49:57.190844 8368 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
I1227 20:49:57.190844 8368 kubeadm.go:319] [preflight] Running pre-flight checks
I1227 20:49:57.191375 8368 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1227 20:49:57.191633 8368 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
I1227 20:49:57.191790 8368 kubeadm.go:319] CONFIG_NAMESPACES: enabled
I1227 20:49:57.191894 8368 kubeadm.go:319] CONFIG_NET_NS: enabled
I1227 20:49:57.191894 8368 kubeadm.go:319] CONFIG_PID_NS: enabled
I1227 20:49:57.191894 8368 kubeadm.go:319] CONFIG_IPC_NS: enabled
I1227 20:49:57.191894 8368 kubeadm.go:319] CONFIG_UTS_NS: enabled
I1227 20:49:57.191894 8368 kubeadm.go:319] CONFIG_CPUSETS: enabled
I1227 20:49:57.192426 8368 kubeadm.go:319] CONFIG_MEMCG: enabled
I1227 20:49:57.192530 8368 kubeadm.go:319] CONFIG_INET: enabled
I1227 20:49:57.192628 8368 kubeadm.go:319] CONFIG_EXT4_FS: enabled
I1227 20:49:57.192763 8368 kubeadm.go:319] CONFIG_PROC_FS: enabled
I1227 20:49:57.192918 8368 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
I1227 20:49:57.193096 8368 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
I1227 20:49:57.193214 8368 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
I1227 20:49:57.193319 8368 kubeadm.go:319] CONFIG_CGROUPS: enabled
I1227 20:49:57.193429 8368 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
I1227 20:49:57.193543 8368 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
I1227 20:49:57.193695 8368 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
I1227 20:49:57.193788 8368 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
I1227 20:49:57.193868 8368 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
I1227 20:49:57.193940 8368 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
I1227 20:49:57.194066 8368 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
I1227 20:49:57.194164 8368 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
I1227 20:49:57.194229 8368 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
I1227 20:49:57.194229 8368 kubeadm.go:319] CONFIG_SECCOMP: enabled
I1227 20:49:57.194229 8368 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
I1227 20:49:57.194229 8368 kubeadm.go:319] OS: Linux
I1227 20:49:57.194229 8368 kubeadm.go:319] CGROUPS_CPU: enabled
I1227 20:49:57.194817 8368 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1227 20:49:57.194879 8368 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1227 20:49:57.194879 8368 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1227 20:49:57.194879 8368 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1227 20:49:57.194879 8368 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1227 20:49:57.194879 8368 kubeadm.go:319] CGROUPS_PIDS: enabled
I1227 20:49:57.194879 8368 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1227 20:49:57.195473 8368 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1227 20:49:57.195473 8368 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1227 20:49:57.195473 8368 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1227 20:49:57.195473 8368 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1227 20:49:57.196058 8368 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1227 20:49:57.200588 8368 out.go:252] - Generating certificates and keys ...
I1227 20:49:57.200588 8368 kubeadm.go:319] [certs] Using existing ca certificate authority
I1227 20:49:57.200588 8368 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1227 20:49:57.201150 8368 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1227 20:49:57.201256 8368 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1227 20:49:57.201256 8368 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1227 20:49:57.201256 8368 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1227 20:49:57.201256 8368 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1227 20:49:57.201821 8368 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-637800 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I1227 20:49:57.201937 8368 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1227 20:49:57.202046 8368 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-637800 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I1227 20:49:57.202046 8368 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1227 20:49:57.202046 8368 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1227 20:49:57.202633 8368 kubeadm.go:319] [certs] Generating "sa" key and public key
I1227 20:49:57.202633 8368 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1227 20:49:57.202633 8368 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1227 20:49:57.202633 8368 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1227 20:49:57.202633 8368 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1227 20:49:57.202633 8368 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1227 20:49:57.203207 8368 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1227 20:49:57.203361 8368 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1227 20:49:57.203361 8368 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1227 20:49:57.205132 8368 out.go:252] - Booting up control plane ...
I1227 20:49:57.205680 8368 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1227 20:49:57.205680 8368 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1227 20:49:57.205680 8368 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1227 20:49:57.205680 8368 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1227 20:49:57.205680 8368 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1227 20:49:57.205680 8368 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1227 20:49:57.206681 8368 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1227 20:49:57.206681 8368 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1227 20:49:57.206681 8368 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1227 20:49:57.206681 8368 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1227 20:49:57.207414 8368 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000173718s
I1227 20:49:57.207414 8368 kubeadm.go:319]
I1227 20:49:57.207414 8368 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1227 20:49:57.207414 8368 kubeadm.go:319] - The kubelet is not running
I1227 20:49:57.207414 8368 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1227 20:49:57.207414 8368 kubeadm.go:319]
I1227 20:49:57.207414 8368 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1227 20:49:57.207414 8368 kubeadm.go:319] - 'systemctl status kubelet'
I1227 20:49:57.208092 8368 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1227 20:49:57.208183 8368 kubeadm.go:319]
W1227 20:49:57.208311 8368 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
CONFIG_NAMESPACES: enabled
CONFIG_NET_NS: enabled
CONFIG_PID_NS: enabled
CONFIG_IPC_NS: enabled
CONFIG_UTS_NS: enabled
CONFIG_CPUSETS: enabled
CONFIG_MEMCG: enabled
CONFIG_INET: enabled
CONFIG_EXT4_FS: enabled
CONFIG_PROC_FS: enabled
CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
CONFIG_FAIR_GROUP_SCHED: enabled
CONFIG_CGROUPS: enabled
CONFIG_CGROUP_CPUACCT: enabled
CONFIG_CGROUP_DEVICE: enabled
CONFIG_CGROUP_FREEZER: enabled
CONFIG_CGROUP_PIDS: enabled
CONFIG_CGROUP_SCHED: enabled
CONFIG_OVERLAY_FS: enabled
CONFIG_AUFS_FS: not set - Required for aufs.
CONFIG_BLK_DEV_DM: enabled
CONFIG_CFS_BANDWIDTH: enabled
CONFIG_SECCOMP: enabled
CONFIG_SECCOMP_FILTER: enabled
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-637800 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-637800 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000173718s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
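kubeadm gave up after four minutes of polling the kubelet's healthz endpoint and hit "connection refused" every time. The same probe reduced to a few lines of Go; running it on the node reproduces the failure mode seen above:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// Probe the kubelet health endpoint kubeadm's wait-control-plane phase
// polls (URL taken from the error above).
func main() {
	c := &http.Client{Timeout: 2 * time.Second}
	resp, err := c.Get("http://127.0.0.1:10248/healthz")
	if err != nil {
		fmt.Println("kubelet not healthy:", err) // e.g. connection refused
		return
	}
	defer resp.Body.Close()
	fmt.Println("kubelet healthz:", resp.Status)
}
```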
! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
CONFIG_NAMESPACES: enabled
CONFIG_NET_NS: enabled
CONFIG_PID_NS: enabled
CONFIG_IPC_NS: enabled
CONFIG_UTS_NS: enabled
CONFIG_CPUSETS: enabled
CONFIG_MEMCG: enabled
CONFIG_INET: enabled
CONFIG_EXT4_FS: enabled
CONFIG_PROC_FS: enabled
CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
CONFIG_FAIR_GROUP_SCHED: enabled
CONFIG_CGROUPS: enabled
CONFIG_CGROUP_CPUACCT: enabled
CONFIG_CGROUP_DEVICE: enabled
CONFIG_CGROUP_FREEZER: enabled
CONFIG_CGROUP_PIDS: enabled
CONFIG_CGROUP_SCHED: enabled
CONFIG_OVERLAY_FS: enabled
CONFIG_AUFS_FS: not set - Required for aufs.
CONFIG_BLK_DEV_DM: enabled
CONFIG_CFS_BANDWIDTH: enabled
CONFIG_SECCOMP: enabled
CONFIG_SECCOMP_FILTER: enabled
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-637800 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-637800 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000173718s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
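[note: two of the warnings above are directly actionable from a `minikube ssh` shell inside the node. A minimal sketch, assuming systemd is PID 1 in the container and that the camelCase KubeletConfiguration field corresponds to the 'FailCgroupV1' option named in the warning:
  # [WARNING Service-kubelet]: enable the kubelet unit
  sudo systemctl enable kubelet.service
  # [WARNING SystemVerification]: opt in to cgroup v1 for kubelet v1.35+
  # (appends to the config kubeadm wrote at /var/lib/kubelet/config.yaml)
  echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
]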
I1227 20:49:57.211515 8368 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
I1227 20:49:57.665139 8368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1227 20:49:57.687065 8368 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1227 20:49:57.691329 8368 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1227 20:49:57.704375 8368 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1227 20:49:57.704375 8368 kubeadm.go:158] found existing configuration files:
I1227 20:49:57.708511 8368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1227 20:49:57.724433 8368 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1227 20:49:57.730256 8368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1227 20:49:57.748506 8368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1227 20:49:57.763257 8368 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1227 20:49:57.767089 8368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1227 20:49:57.784672 8368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1227 20:49:57.798565 8368 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1227 20:49:57.803841 8368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1227 20:49:57.820275 8368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1227 20:49:57.833415 8368 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1227 20:49:57.837021 8368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1227 20:49:57.854541 8368 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1227 20:49:57.978335 8368 kubeadm.go:319] [WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
I1227 20:49:58.063508 8368 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1227 20:49:58.163766 8368 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1227 20:53:59.060878 8368 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1227 20:53:59.060994 8368 kubeadm.go:319]
I1227 20:53:59.061027 8368 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
I1227 20:53:59.065962 8368 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
I1227 20:53:59.065962 8368 kubeadm.go:319] [preflight] Running pre-flight checks
I1227 20:53:59.066638 8368 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1227 20:53:59.066828 8368 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
I1227 20:53:59.066998 8368 kubeadm.go:319] CONFIG_NAMESPACES: enabled
I1227 20:53:59.067110 8368 kubeadm.go:319] CONFIG_NET_NS: enabled
I1227 20:53:59.067254 8368 kubeadm.go:319] CONFIG_PID_NS: enabled
I1227 20:53:59.067424 8368 kubeadm.go:319] CONFIG_IPC_NS: enabled
I1227 20:53:59.067538 8368 kubeadm.go:319] CONFIG_UTS_NS: enabled
I1227 20:53:59.067776 8368 kubeadm.go:319] CONFIG_CPUSETS: enabled
I1227 20:53:59.067885 8368 kubeadm.go:319] CONFIG_MEMCG: enabled
I1227 20:53:59.067999 8368 kubeadm.go:319] CONFIG_INET: enabled
I1227 20:53:59.068195 8368 kubeadm.go:319] CONFIG_EXT4_FS: enabled
I1227 20:53:59.068359 8368 kubeadm.go:319] CONFIG_PROC_FS: enabled
I1227 20:53:59.068588 8368 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
I1227 20:53:59.068774 8368 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
I1227 20:53:59.068950 8368 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
I1227 20:53:59.069079 8368 kubeadm.go:319] CONFIG_CGROUPS: enabled
I1227 20:53:59.069272 8368 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
I1227 20:53:59.069313 8368 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
I1227 20:53:59.069313 8368 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
I1227 20:53:59.069313 8368 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
I1227 20:53:59.069313 8368 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
I1227 20:53:59.069313 8368 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
I1227 20:53:59.069840 8368 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
I1227 20:53:59.070006 8368 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
I1227 20:53:59.070175 8368 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
I1227 20:53:59.070302 8368 kubeadm.go:319] CONFIG_SECCOMP: enabled
I1227 20:53:59.070375 8368 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
I1227 20:53:59.070375 8368 kubeadm.go:319] OS: Linux
I1227 20:53:59.070375 8368 kubeadm.go:319] CGROUPS_CPU: enabled
I1227 20:53:59.070375 8368 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1227 20:53:59.070911 8368 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1227 20:53:59.071095 8368 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1227 20:53:59.071295 8368 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1227 20:53:59.071473 8368 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1227 20:53:59.071624 8368 kubeadm.go:319] CGROUPS_PIDS: enabled
I1227 20:53:59.071846 8368 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1227 20:53:59.071990 8368 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1227 20:53:59.072054 8368 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1227 20:53:59.072054 8368 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1227 20:53:59.072592 8368 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1227 20:53:59.072797 8368 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1227 20:53:59.074527 8368 out.go:252] - Generating certificates and keys ...
I1227 20:53:59.074527 8368 kubeadm.go:319] [certs] Using existing ca certificate authority
I1227 20:53:59.075176 8368 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1227 20:53:59.075210 8368 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1227 20:53:59.075210 8368 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
I1227 20:53:59.075210 8368 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
I1227 20:53:59.075210 8368 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
I1227 20:53:59.075210 8368 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
I1227 20:53:59.075210 8368 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
I1227 20:53:59.076121 8368 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1227 20:53:59.076121 8368 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1227 20:53:59.076121 8368 kubeadm.go:319] [certs] Using the existing "sa" key
I1227 20:53:59.076121 8368 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1227 20:53:59.076121 8368 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1227 20:53:59.076121 8368 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1227 20:53:59.076121 8368 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1227 20:53:59.076121 8368 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1227 20:53:59.077110 8368 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1227 20:53:59.077110 8368 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1227 20:53:59.077110 8368 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1227 20:53:59.080109 8368 out.go:252] - Booting up control plane ...
I1227 20:53:59.080109 8368 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1227 20:53:59.080109 8368 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1227 20:53:59.080109 8368 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1227 20:53:59.081109 8368 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1227 20:53:59.081109 8368 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1227 20:53:59.081109 8368 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1227 20:53:59.081109 8368 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1227 20:53:59.081109 8368 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1227 20:53:59.082108 8368 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1227 20:53:59.082108 8368 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1227 20:53:59.082108 8368 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001080211s
I1227 20:53:59.082108 8368 kubeadm.go:319]
I1227 20:53:59.082108 8368 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1227 20:53:59.082108 8368 kubeadm.go:319] - The kubelet is not running
I1227 20:53:59.082108 8368 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1227 20:53:59.082108 8368 kubeadm.go:319]
I1227 20:53:59.083110 8368 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1227 20:53:59.083110 8368 kubeadm.go:319] - 'systemctl status kubelet'
I1227 20:53:59.083110 8368 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1227 20:53:59.083110 8368 kubeadm.go:319]
I1227 20:53:59.083110 8368 kubeadm.go:403] duration metric: took 8m4.4331849s to StartCluster
I1227 20:53:59.083110 8368 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
I1227 20:53:59.086714 8368 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
I1227 20:53:59.167941 8368 cri.go:96] found id: ""
I1227 20:53:59.167941 8368 logs.go:282] 0 containers: []
W1227 20:53:59.167941 8368 logs.go:284] No container was found matching "kube-apiserver"
I1227 20:53:59.167941 8368 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
I1227 20:53:59.171939 8368 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
I1227 20:53:59.218478 8368 cri.go:96] found id: ""
I1227 20:53:59.218478 8368 logs.go:282] 0 containers: []
W1227 20:53:59.218478 8368 logs.go:284] No container was found matching "etcd"
I1227 20:53:59.218478 8368 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
I1227 20:53:59.226822 8368 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
I1227 20:53:59.284237 8368 cri.go:96] found id: ""
I1227 20:53:59.284237 8368 logs.go:282] 0 containers: []
W1227 20:53:59.284237 8368 logs.go:284] No container was found matching "coredns"
I1227 20:53:59.284237 8368 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
I1227 20:53:59.288231 8368 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
I1227 20:53:59.377891 8368 cri.go:96] found id: ""
I1227 20:53:59.377891 8368 logs.go:282] 0 containers: []
W1227 20:53:59.377891 8368 logs.go:284] No container was found matching "kube-scheduler"
I1227 20:53:59.377891 8368 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
I1227 20:53:59.382906 8368 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
I1227 20:53:59.440858 8368 cri.go:96] found id: ""
I1227 20:53:59.440858 8368 logs.go:282] 0 containers: []
W1227 20:53:59.440858 8368 logs.go:284] No container was found matching "kube-proxy"
I1227 20:53:59.440858 8368 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
I1227 20:53:59.444864 8368 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
I1227 20:53:59.494387 8368 cri.go:96] found id: ""
I1227 20:53:59.494387 8368 logs.go:282] 0 containers: []
W1227 20:53:59.494387 8368 logs.go:284] No container was found matching "kube-controller-manager"
I1227 20:53:59.494387 8368 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
I1227 20:53:59.499982 8368 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
I1227 20:53:59.549234 8368 cri.go:96] found id: ""
I1227 20:53:59.549234 8368 logs.go:282] 0 containers: []
W1227 20:53:59.549234 8368 logs.go:284] No container was found matching "kindnet"
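[note: the per-name container checks above can be collapsed into one manual query; a sketch using the same crictl invocation the log runs later, with the binary name shortened to minikube:
  # list every CRI container regardless of state; an empty result matches the
  # "0 containers" findings above and means the control plane never came up
  minikube -p force-systemd-flag-637800 ssh "sudo crictl --timeout=10s ps -a"
]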
I1227 20:53:59.549234 8368 logs.go:123] Gathering logs for kubelet ...
I1227 20:53:59.549234 8368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1227 20:53:59.622162 8368 logs.go:123] Gathering logs for dmesg ...
I1227 20:53:59.622162 8368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1227 20:53:59.659345 8368 logs.go:123] Gathering logs for describe nodes ...
I1227 20:53:59.659345 8368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1227 20:53:59.739538 8368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1227 20:53:59.729848 10304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 20:53:59.730979 10304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 20:53:59.731673 10304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 20:53:59.734528 10304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 20:53:59.735388 10304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1227 20:53:59.739538 8368 logs.go:123] Gathering logs for Docker ...
I1227 20:53:59.739538 8368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I1227 20:53:59.773539 8368 logs.go:123] Gathering logs for container status ...
I1227 20:53:59.773539 8368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W1227 20:53:59.825106 8368 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
CONFIG_NAMESPACES: enabled
CONFIG_NET_NS: enabled
CONFIG_PID_NS: enabled
CONFIG_IPC_NS: enabled
CONFIG_UTS_NS: enabled
CONFIG_CPUSETS: enabled
CONFIG_MEMCG: enabled
CONFIG_INET: enabled
CONFIG_EXT4_FS: enabled
CONFIG_PROC_FS: enabled
CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
CONFIG_FAIR_GROUP_SCHED: enabled
CONFIG_CGROUPS: enabled
CONFIG_CGROUP_CPUACCT: enabled
CONFIG_CGROUP_DEVICE: enabled
CONFIG_CGROUP_FREEZER: enabled
CONFIG_CGROUP_PIDS: enabled
CONFIG_CGROUP_SCHED: enabled
CONFIG_OVERLAY_FS: enabled
CONFIG_AUFS_FS: not set - Required for aufs.
CONFIG_BLK_DEV_DM: enabled
CONFIG_CFS_BANDWIDTH: enabled
CONFIG_SECCOMP: enabled
CONFIG_SECCOMP_FILTER: enabled
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001080211s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1227 20:53:59.825106 8368 out.go:285] *
W1227 20:53:59.825106 8368 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
CONFIG_NAMESPACES: enabled
CONFIG_NET_NS: enabled
CONFIG_PID_NS: enabled
CONFIG_IPC_NS: enabled
CONFIG_UTS_NS: enabled
CONFIG_CPUSETS: enabled
CONFIG_MEMCG: enabled
CONFIG_INET: enabled
CONFIG_EXT4_FS: enabled
CONFIG_PROC_FS: enabled
CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
CONFIG_FAIR_GROUP_SCHED: enabled
CONFIG_CGROUPS: enabled
CONFIG_CGROUP_CPUACCT: enabled
CONFIG_CGROUP_DEVICE: enabled
CONFIG_CGROUP_FREEZER: enabled
CONFIG_CGROUP_PIDS: enabled
CONFIG_CGROUP_SCHED: enabled
CONFIG_OVERLAY_FS: enabled
CONFIG_AUFS_FS: not set - Required for aufs.
CONFIG_BLK_DEV_DM: enabled
CONFIG_CFS_BANDWIDTH: enabled
CONFIG_SECCOMP: enabled
CONFIG_SECCOMP_FILTER: enabled
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001080211s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1227 20:53:59.826106 8368 out.go:285] *
W1227 20:53:59.826106 8368 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
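[note: the log bundle the box asks for can be produced for this profile specifically; a sketch, with the binary name shortened to minikube:
  minikube logs -p force-systemd-flag-637800 --file=logs.txt
]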
I1227 20:53:59.831107 8368 out.go:203]
W1227 20:53:59.835110 8368 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
CONFIG_NAMESPACES: enabled
CONFIG_NET_NS: enabled
CONFIG_PID_NS: enabled
CONFIG_IPC_NS: enabled
CONFIG_UTS_NS: enabled
CONFIG_CPUSETS: enabled
CONFIG_MEMCG: enabled
CONFIG_INET: enabled
CONFIG_EXT4_FS: enabled
CONFIG_PROC_FS: enabled
CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
CONFIG_FAIR_GROUP_SCHED: enabled
CONFIG_CGROUPS: enabled
CONFIG_CGROUP_CPUACCT: enabled
CONFIG_CGROUP_DEVICE: enabled
CONFIG_CGROUP_FREEZER: enabled
CONFIG_CGROUP_PIDS: enabled
CONFIG_CGROUP_SCHED: enabled
CONFIG_OVERLAY_FS: enabled
CONFIG_AUFS_FS: not set - Required for aufs.
CONFIG_BLK_DEV_DM: enabled
CONFIG_CFS_BANDWIDTH: enabled
CONFIG_SECCOMP: enabled
CONFIG_SECCOMP_FILTER: enabled
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001080211s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1227 20:53:59.835110 8368 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W1227 20:53:59.835110 8368 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
* Related issue: https://github.com/kubernetes/minikube/issues/4172
I1227 20:53:59.838109 8368 out.go:203]
** /stderr **
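The failure mode matches the suggestion minikube prints above: the Docker Desktop daemon reports CgroupDriver:cgroupfs on a cgroup v1 WSL2 kernel, while --force-systemd asks the node to run the kubelet under the systemd driver, and the kubelet never answers its health check. A minimal retry built only from the flags and commands already suggested in this output (a sketch, not verified to fix this particular run) would be:

    out/minikube-windows-amd64.exe delete -p force-systemd-flag-637800
    out/minikube-windows-amd64.exe start -p force-systemd-flag-637800 --memory=3072 --force-systemd --driver=docker --extra-config=kubelet.cgroup-driver=systemd

If the kubelet still fails, the commands kubeadm lists can be run inside the node over ssh, in the same form the test harness uses:

    out/minikube-windows-amd64.exe -p force-systemd-flag-637800 ssh "sudo systemctl status kubelet"
    out/minikube-windows-amd64.exe -p force-systemd-flag-637800 ssh "sudo journalctl -xeu kubelet"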
docker_test.go:93: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p force-systemd-flag-637800 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker" : exit status 109
docker_test.go:110: (dbg) Run: out/minikube-windows-amd64.exe -p force-systemd-flag-637800 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2025-12-27 20:54:00.9046817 +0000 UTC m=+3513.036184801
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======> post-mortem[TestForceSystemdFlag]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======> post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:240: (dbg) Run: docker inspect force-systemd-flag-637800
helpers_test.go:244: (dbg) docker inspect force-systemd-flag-637800:
-- stdout --
[
{
"Id": "0daf14eba1572263ad71241828729d2517fdd9925d22383cc68641cec9751df0",
"Created": "2025-12-27T20:45:38.326397606Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 178530,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-12-27T20:45:39.310328144Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
"ResolvConfPath": "/var/lib/docker/containers/0daf14eba1572263ad71241828729d2517fdd9925d22383cc68641cec9751df0/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/0daf14eba1572263ad71241828729d2517fdd9925d22383cc68641cec9751df0/hostname",
"HostsPath": "/var/lib/docker/containers/0daf14eba1572263ad71241828729d2517fdd9925d22383cc68641cec9751df0/hosts",
"LogPath": "/var/lib/docker/containers/0daf14eba1572263ad71241828729d2517fdd9925d22383cc68641cec9751df0/0daf14eba1572263ad71241828729d2517fdd9925d22383cc68641cec9751df0-json.log",
"Name": "/force-systemd-flag-637800",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"force-systemd-flag-637800:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "force-systemd-flag-637800",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 3221225472,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 3221225472,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/1af882482536703d2d0fb7d4938f688f34205856ddeb30211a91c0e05949f9b5-init/diff:/var/lib/docker/overlay2/cc9bc6a1bc34df01fcf2646a74af47280e16e85e4444f747f528eb17ae725d09/diff",
"MergedDir": "/var/lib/docker/overlay2/1af882482536703d2d0fb7d4938f688f34205856ddeb30211a91c0e05949f9b5/merged",
"UpperDir": "/var/lib/docker/overlay2/1af882482536703d2d0fb7d4938f688f34205856ddeb30211a91c0e05949f9b5/diff",
"WorkDir": "/var/lib/docker/overlay2/1af882482536703d2d0fb7d4938f688f34205856ddeb30211a91c0e05949f9b5/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "force-systemd-flag-637800",
"Source": "/var/lib/docker/volumes/force-systemd-flag-637800/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "force-systemd-flag-637800",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "force-systemd-flag-637800",
"name.minikube.sigs.k8s.io": "force-systemd-flag-637800",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "2941a1a36c5e57306f300cc710f321bb5e77b6a482f1684c61dc7ccb3cd4a0cb",
"SandboxKey": "/var/run/docker/netns/2941a1a36c5e",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "59660"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "59661"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "59662"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "59663"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "59664"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"force-systemd-flag-637800": {
"IPAMConfig": {
"IPv4Address": "192.168.85.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "02:42:c0:a8:55:02",
"DriverOpts": null,
"NetworkID": "9721f24ad9d812e3d6ab44ec6c3549073102d8e033685f5c65cbbafc6107d266",
"EndpointID": "792ee673e615f61a629892909a7414c18bf31f940ece5099a725f7e98f79392a",
"Gateway": "192.168.85.1",
"IPAddress": "192.168.85.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"force-systemd-flag-637800",
"0daf14eba157"
]
}
}
}
}
]
-- /stdout --
helpers_test.go:248: (dbg) Run: out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-flag-637800 -n force-systemd-flag-637800
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-flag-637800 -n force-systemd-flag-637800: exit status 6 (563.7809ms)
-- stdout --
Running
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1227 20:54:01.497822 2428 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-637800" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
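The exit status 6 follows from the failed start: the profile was never written to the kubeconfig, so the endpoint lookup fails. Had the cluster come up but left a stale context, the stdout above already names the fix; a sketch of that command against this profile (the -p flag is minikube's global profile selector) would be:

    out/minikube-windows-amd64.exe update-context -p force-systemd-flag-637800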
helpers_test.go:253: <<< TestForceSystemdFlag FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======> post-mortem[TestForceSystemdFlag]: minikube logs <======
helpers_test.go:256: (dbg) Run: out/minikube-windows-amd64.exe -p force-systemd-flag-637800 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-637800 logs -n 25: (2.1029756s)
helpers_test.go:261: TestForceSystemdFlag logs:
-- stdout --
==> Audit <==
┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
│ ssh │ -p cilium-630300 sudo cri-dockerd --version │ cilium-630300 │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ │
│ ssh │ -p cilium-630300 sudo systemctl status containerd --all --full --no-pager │ cilium-630300 │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ │
│ ssh │ -p cilium-630300 sudo systemctl cat containerd --no-pager │ cilium-630300 │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ │
│ ssh │ -p cilium-630300 sudo cat /lib/systemd/system/containerd.service │ cilium-630300 │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ │
│ ssh │ -p cilium-630300 sudo cat /etc/containerd/config.toml │ cilium-630300 │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ │
│ ssh │ -p cilium-630300 sudo containerd config dump │ cilium-630300 │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ │
│ ssh │ -p cilium-630300 sudo systemctl status crio --all --full --no-pager │ cilium-630300 │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ │
│ ssh │ -p cilium-630300 sudo systemctl cat crio --no-pager │ cilium-630300 │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ │
│ ssh │ -p cilium-630300 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \; │ cilium-630300 │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ │
│ ssh │ -p cilium-630300 sudo crio config │ cilium-630300 │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ │
│ delete │ -p cilium-630300 │ cilium-630300 │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ 27 Dec 25 20:52 UTC │
│ start │ -p NoKubernetes-924000 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker │ NoKubernetes-924000 │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ │
│ start │ -p NoKubernetes-924000 --memory=3072 --alsologtostderr -v=5 --driver=docker │ NoKubernetes-924000 │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ 27 Dec 25 20:52 UTC │
│ start │ -p NoKubernetes-924000 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker │ NoKubernetes-924000 │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ 27 Dec 25 20:53 UTC │
│ delete │ -p NoKubernetes-924000 │ NoKubernetes-924000 │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │ 27 Dec 25 20:53 UTC │
│ start │ -p NoKubernetes-924000 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker │ NoKubernetes-924000 │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │ 27 Dec 25 20:53 UTC │
│ ssh │ -p NoKubernetes-924000 sudo systemctl is-active --quiet service kubelet │ NoKubernetes-924000 │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │ │
│ stop │ -p NoKubernetes-924000 │ NoKubernetes-924000 │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │ 27 Dec 25 20:53 UTC │
│ start │ -p NoKubernetes-924000 --driver=docker │ NoKubernetes-924000 │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │ 27 Dec 25 20:53 UTC │
│ delete │ -p stopped-upgrade-172600 │ stopped-upgrade-172600 │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │ 27 Dec 25 20:53 UTC │
│ start │ -p cert-options-955700 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost │ cert-options-955700 │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │ │
│ ssh │ -p NoKubernetes-924000 sudo systemctl is-active --quiet service kubelet │ NoKubernetes-924000 │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │ │
│ delete │ -p NoKubernetes-924000 │ NoKubernetes-924000 │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │ 27 Dec 25 20:53 UTC │
│ start │ -p cert-expiration-978000 --memory=3072 --cert-expiration=3m --driver=docker │ cert-expiration-978000 │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │ │
│ ssh │ force-systemd-flag-637800 ssh docker info --format {{.CgroupDriver}} │ force-systemd-flag-637800 │ minikube4\jenkins │ v1.37.0 │ 27 Dec 25 20:54 UTC │ 27 Dec 25 20:54 UTC │
└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/27 20:53:55
Running on machine: minikube4
Binary: Built with gc go1.25.5 for windows/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1227 20:53:55.870774 13396 out.go:360] Setting OutFile to fd 1048 ...
I1227 20:53:55.923649 13396 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:53:55.923649 13396 out.go:374] Setting ErrFile to fd 908...
I1227 20:53:55.923649 13396 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:53:55.938226 13396 out.go:368] Setting JSON to false
I1227 20:53:55.943220 13396 start.go:133] hostinfo: {"hostname":"minikube4","uptime":4222,"bootTime":1766864613,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"22H2","kernelVersion":"10.0.19045.6691 Build 19045.6691","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
W1227 20:53:55.943220 13396 start.go:141] gopshost.Virtualization returned error: not implemented yet
I1227 20:53:55.947212 13396 out.go:179] * [cert-expiration-978000] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 22H2
I1227 20:53:55.953208 13396 notify.go:221] Checking for updates...
I1227 20:53:55.958549 13396 out.go:179] - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
I1227 20:53:55.964592 13396 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1227 20:53:55.969592 13396 out.go:179] - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
I1227 20:53:55.975586 13396 out.go:179] - MINIKUBE_LOCATION=22332
I1227 20:53:55.979582 13396 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1227 20:53:55.985592 13396 config.go:182] Loaded profile config "cert-options-955700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 20:53:55.985592 13396 config.go:182] Loaded profile config "force-systemd-flag-637800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 20:53:55.986588 13396 config.go:182] Loaded profile config "running-upgrade-127300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.0
I1227 20:53:55.986588 13396 driver.go:422] Setting default libvirt URI to qemu:///system
I1227 20:53:56.119586 13396 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
I1227 20:53:56.123644 13396 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1227 20:53:56.377199 13396 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:96 OomKillDisable:true NGoroutines:105 SystemTime:2025-12-27 20:53:56.351136497 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 In
dexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDesc
ription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Prog
ram Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
I1227 20:53:56.379209 13396 out.go:179] * Using the docker driver based on user configuration
I1227 20:53:56.384219 13396 start.go:309] selected driver: docker
I1227 20:53:56.384219 13396 start.go:928] validating driver "docker" against <nil>
I1227 20:53:56.384219 13396 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1227 20:53:56.390199 13396 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1227 20:53:56.650705 13396 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-27 20:53:56.631093248 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
I1227 20:53:56.650705 13396 start_flags.go:333] no existing cluster config was found, will generate one from the flags
I1227 20:53:56.651720 13396 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
I1227 20:53:56.656707 13396 out.go:179] * Using Docker Desktop driver with root privileges
I1227 20:53:56.658715 13396 cni.go:84] Creating CNI manager for ""
I1227 20:53:56.658715 13396 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1227 20:53:56.658715 13396 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I1227 20:53:56.658715 13396 start.go:353] cluster config:
{Name:cert-expiration-978000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:cert-expiration-978000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1227 20:53:56.660710 13396 out.go:179] * Starting "cert-expiration-978000" primary control-plane node in "cert-expiration-978000" cluster
I1227 20:53:56.667715 13396 cache.go:134] Beginning downloading kic base image for docker with docker
I1227 20:53:56.669714 13396 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
I1227 20:53:56.673722 13396 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I1227 20:53:56.673722 13396 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
I1227 20:53:56.673722 13396 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4
I1227 20:53:56.673722 13396 cache.go:65] Caching tarball of preloaded images
I1227 20:53:56.673722 13396 preload.go:251] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1227 20:53:56.673722 13396 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
I1227 20:53:56.674716 13396 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cert-expiration-978000\config.json ...
I1227 20:53:56.674716 13396 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cert-expiration-978000\config.json: {Name:mkb7d1993c220c17da5cbef47edfd03ae6fead9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 20:53:56.750724 13396 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
I1227 20:53:56.750724 13396 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
I1227 20:53:56.751723 13396 cache.go:243] Successfully downloaded all kic artifacts
I1227 20:53:56.751723 13396 start.go:360] acquireMachinesLock for cert-expiration-978000: {Name:mkf7c69e71f2771f2c30c98cbe8b45870562cec4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1227 20:53:56.751723 13396 start.go:364] duration metric: took 0s to acquireMachinesLock for "cert-expiration-978000"
I1227 20:53:56.751723 13396 start.go:93] Provisioning new machine with config: &{Name:cert-expiration-978000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:cert-expiration-978000 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I1227 20:53:56.751723 13396 start.go:125] createHost starting for "" (driver="docker")
I1227 20:53:55.240579 11092 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cert-options-955700:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (10.8094115s)
I1227 20:53:55.240579 11092 kic.go:203] duration metric: took 10.8136027s to extract preloaded images to volume ...
I1227 20:53:55.244576 11092 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1227 20:53:55.477630 11092 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:82 OomKillDisable:true NGoroutines:92 SystemTime:2025-12-27 20:53:55.457364973 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
I1227 20:53:55.482211 11092 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1227 20:53:55.725510 11092 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-options-955700 --name cert-options-955700 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-options-955700 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-options-955700 --network cert-options-955700 --ip 192.168.76.2 --volume cert-options-955700:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8555 --publish=127.0.0.1::8555 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
I1227 20:53:56.537169 11092 cli_runner.go:164] Run: docker container inspect cert-options-955700 --format={{.State.Running}}
I1227 20:53:56.605712 11092 cli_runner.go:164] Run: docker container inspect cert-options-955700 --format={{.State.Status}}
I1227 20:53:56.663709 11092 cli_runner.go:164] Run: docker exec cert-options-955700 stat /var/lib/dpkg/alternatives/iptables
I1227 20:53:56.783166 11092 oci.go:144] the created container "cert-options-955700" has a running status.
I1227 20:53:56.783166 11092 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cert-options-955700\id_rsa...
I1227 20:53:59.060878 8368 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1227 20:53:59.060994 8368 kubeadm.go:319]
I1227 20:53:59.061027 8368 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
I1227 20:53:59.065962 8368 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
I1227 20:53:59.065962 8368 kubeadm.go:319] [preflight] Running pre-flight checks
I1227 20:53:59.066638 8368 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1227 20:53:59.066828 8368 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
I1227 20:53:59.066998 8368 kubeadm.go:319] CONFIG_NAMESPACES: enabled
I1227 20:53:59.067110 8368 kubeadm.go:319] CONFIG_NET_NS: enabled
I1227 20:53:59.067254 8368 kubeadm.go:319] CONFIG_PID_NS: enabled
I1227 20:53:59.067424 8368 kubeadm.go:319] CONFIG_IPC_NS: enabled
I1227 20:53:59.067538 8368 kubeadm.go:319] CONFIG_UTS_NS: enabled
I1227 20:53:59.067776 8368 kubeadm.go:319] CONFIG_CPUSETS: enabled
I1227 20:53:59.067885 8368 kubeadm.go:319] CONFIG_MEMCG: enabled
I1227 20:53:59.067999 8368 kubeadm.go:319] CONFIG_INET: enabled
I1227 20:53:59.068195 8368 kubeadm.go:319] CONFIG_EXT4_FS: enabled
I1227 20:53:59.068359 8368 kubeadm.go:319] CONFIG_PROC_FS: enabled
I1227 20:53:59.068588 8368 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
I1227 20:53:59.068774 8368 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
I1227 20:53:59.068950 8368 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
I1227 20:53:59.069079 8368 kubeadm.go:319] CONFIG_CGROUPS: enabled
I1227 20:53:59.069272 8368 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
I1227 20:53:59.069313 8368 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
I1227 20:53:59.069313 8368 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
I1227 20:53:59.069313 8368 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
I1227 20:53:59.069313 8368 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
I1227 20:53:59.069313 8368 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
I1227 20:53:59.069840 8368 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
I1227 20:53:59.070006 8368 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
I1227 20:53:59.070175 8368 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
I1227 20:53:59.070302 8368 kubeadm.go:319] CONFIG_SECCOMP: enabled
I1227 20:53:59.070375 8368 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
I1227 20:53:59.070375 8368 kubeadm.go:319] OS: Linux
I1227 20:53:59.070375 8368 kubeadm.go:319] CGROUPS_CPU: enabled
I1227 20:53:59.070375 8368 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1227 20:53:59.070911 8368 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1227 20:53:59.071095 8368 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1227 20:53:59.071295 8368 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1227 20:53:59.071473 8368 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1227 20:53:59.071624 8368 kubeadm.go:319] CGROUPS_PIDS: enabled
I1227 20:53:59.071846 8368 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1227 20:53:59.071990 8368 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1227 20:53:59.072054 8368 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1227 20:53:59.072054 8368 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1227 20:53:59.072592 8368 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1227 20:53:59.072797 8368 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1227 20:53:59.074527 8368 out.go:252] - Generating certificates and keys ...
I1227 20:53:59.074527 8368 kubeadm.go:319] [certs] Using existing ca certificate authority
I1227 20:53:59.075176 8368 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1227 20:53:59.075210 8368 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1227 20:53:59.075210 8368 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
I1227 20:53:59.075210 8368 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
I1227 20:53:59.075210 8368 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
I1227 20:53:59.075210 8368 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
I1227 20:53:59.075210 8368 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
I1227 20:53:59.076121 8368 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1227 20:53:59.076121 8368 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1227 20:53:59.076121 8368 kubeadm.go:319] [certs] Using the existing "sa" key
I1227 20:53:59.076121 8368 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1227 20:53:59.076121 8368 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1227 20:53:59.076121 8368 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1227 20:53:59.076121 8368 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1227 20:53:59.076121 8368 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1227 20:53:59.077110 8368 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1227 20:53:59.077110 8368 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1227 20:53:59.077110 8368 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1227 20:53:59.080109 8368 out.go:252] - Booting up control plane ...
I1227 20:53:59.080109 8368 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1227 20:53:59.080109 8368 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1227 20:53:59.080109 8368 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1227 20:53:59.081109 8368 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1227 20:53:59.081109 8368 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1227 20:53:59.081109 8368 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1227 20:53:59.081109 8368 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1227 20:53:59.081109 8368 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1227 20:53:59.082108 8368 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1227 20:53:59.082108 8368 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1227 20:53:59.082108 8368 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001080211s
I1227 20:53:59.082108 8368 kubeadm.go:319]
I1227 20:53:59.082108 8368 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1227 20:53:59.082108 8368 kubeadm.go:319] - The kubelet is not running
I1227 20:53:59.082108 8368 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1227 20:53:59.082108 8368 kubeadm.go:319]
I1227 20:53:59.083110 8368 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1227 20:53:59.083110 8368 kubeadm.go:319] - 'systemctl status kubelet'
I1227 20:53:59.083110 8368 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1227 20:53:59.083110 8368 kubeadm.go:319]
I1227 20:53:59.083110 8368 kubeadm.go:403] duration metric: took 8m4.4331849s to StartCluster
I1227 20:53:59.083110 8368 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
I1227 20:53:59.086714 8368 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
I1227 20:53:59.167941 8368 cri.go:96] found id: ""
I1227 20:53:59.167941 8368 logs.go:282] 0 containers: []
W1227 20:53:59.167941 8368 logs.go:284] No container was found matching "kube-apiserver"
I1227 20:53:59.167941 8368 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
I1227 20:53:59.171939 8368 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
I1227 20:53:59.218478 8368 cri.go:96] found id: ""
I1227 20:53:59.218478 8368 logs.go:282] 0 containers: []
W1227 20:53:59.218478 8368 logs.go:284] No container was found matching "etcd"
I1227 20:53:59.218478 8368 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
I1227 20:53:59.226822 8368 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
I1227 20:53:59.284237 8368 cri.go:96] found id: ""
I1227 20:53:59.284237 8368 logs.go:282] 0 containers: []
W1227 20:53:59.284237 8368 logs.go:284] No container was found matching "coredns"
I1227 20:53:59.284237 8368 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
I1227 20:53:59.288231 8368 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
I1227 20:53:59.377891 8368 cri.go:96] found id: ""
I1227 20:53:59.377891 8368 logs.go:282] 0 containers: []
W1227 20:53:59.377891 8368 logs.go:284] No container was found matching "kube-scheduler"
I1227 20:53:59.377891 8368 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
I1227 20:53:59.382906 8368 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
I1227 20:53:59.440858 8368 cri.go:96] found id: ""
I1227 20:53:59.440858 8368 logs.go:282] 0 containers: []
W1227 20:53:59.440858 8368 logs.go:284] No container was found matching "kube-proxy"
I1227 20:53:59.440858 8368 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
I1227 20:53:59.444864 8368 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
I1227 20:53:59.494387 8368 cri.go:96] found id: ""
I1227 20:53:59.494387 8368 logs.go:282] 0 containers: []
W1227 20:53:59.494387 8368 logs.go:284] No container was found matching "kube-controller-manager"
I1227 20:53:59.494387 8368 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
I1227 20:53:59.499982 8368 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
I1227 20:53:59.549234 8368 cri.go:96] found id: ""
I1227 20:53:59.549234 8368 logs.go:282] 0 containers: []
W1227 20:53:59.549234 8368 logs.go:284] No container was found matching "kindnet"
I1227 20:53:59.549234 8368 logs.go:123] Gathering logs for kubelet ...
I1227 20:53:59.549234 8368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1227 20:53:59.622162 8368 logs.go:123] Gathering logs for dmesg ...
I1227 20:53:59.622162 8368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1227 20:53:59.659345 8368 logs.go:123] Gathering logs for describe nodes ...
I1227 20:53:59.659345 8368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1227 20:53:59.739538 8368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1227 20:53:59.729848 10304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 20:53:59.730979 10304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 20:53:59.731673 10304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 20:53:59.734528 10304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 20:53:59.735388 10304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
E1227 20:53:59.729848 10304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 20:53:59.730979 10304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 20:53:59.731673 10304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 20:53:59.734528 10304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 20:53:59.735388 10304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I1227 20:53:59.739538 8368 logs.go:123] Gathering logs for Docker ...
I1227 20:53:59.739538 8368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I1227 20:53:59.773539 8368 logs.go:123] Gathering logs for container status ...
I1227 20:53:59.773539 8368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W1227 20:53:59.825106 8368 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
CONFIG_NAMESPACES: enabled
CONFIG_NET_NS: enabled
CONFIG_PID_NS: enabled
CONFIG_IPC_NS: enabled
CONFIG_UTS_NS: enabled
CONFIG_CPUSETS: enabled
CONFIG_MEMCG: enabled
CONFIG_INET: enabled
CONFIG_EXT4_FS: enabled
CONFIG_PROC_FS: enabled
CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
CONFIG_FAIR_GROUP_SCHED: enabled
CONFIG_CGROUPS: enabled
CONFIG_CGROUP_CPUACCT: enabled
CONFIG_CGROUP_DEVICE: enabled
CONFIG_CGROUP_FREEZER: enabled
CONFIG_CGROUP_PIDS: enabled
CONFIG_CGROUP_SCHED: enabled
CONFIG_OVERLAY_FS: enabled
CONFIG_AUFS_FS: not set - Required for aufs.
CONFIG_BLK_DEV_DM: enabled
CONFIG_CFS_BANDWIDTH: enabled
CONFIG_SECCOMP: enabled
CONFIG_SECCOMP_FILTER: enabled
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001080211s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
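The dump above ends with kubeadm's own troubleshooting hints ('systemctl status kubelet' and 'journalctl -xeu kubelet'). A minimal sketch of running those same checks from the Windows host via minikube ssh, assuming only the profile name that appears in this run:
# run kubeadm's suggested checks inside the failing node container
minikube -p force-systemd-flag-637800 ssh -- sudo systemctl status kubelet
minikube -p force-systemd-flag-637800 ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 50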
W1227 20:53:59.825106 8368 out.go:285] *
W1227 20:53:59.825106 8368 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
CONFIG_NAMESPACES: enabled
CONFIG_NET_NS: enabled
CONFIG_PID_NS: enabled
CONFIG_IPC_NS: enabled
CONFIG_UTS_NS: enabled
CONFIG_CPUSETS: enabled
CONFIG_MEMCG: enabled
CONFIG_INET: enabled
CONFIG_EXT4_FS: enabled
CONFIG_PROC_FS: enabled
CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
CONFIG_FAIR_GROUP_SCHED: enabled
CONFIG_CGROUPS: enabled
CONFIG_CGROUP_CPUACCT: enabled
CONFIG_CGROUP_DEVICE: enabled
CONFIG_CGROUP_FREEZER: enabled
CONFIG_CGROUP_PIDS: enabled
CONFIG_CGROUP_SCHED: enabled
CONFIG_OVERLAY_FS: enabled
CONFIG_AUFS_FS: not set - Required for aufs.
CONFIG_BLK_DEV_DM: enabled
CONFIG_CFS_BANDWIDTH: enabled
CONFIG_SECCOMP: enabled
CONFIG_SECCOMP_FILTER: enabled
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001080211s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1227 20:53:59.826106 8368 out.go:285] *
W1227 20:53:59.826106 8368 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
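The box above asks for a full log capture when filing an issue. A sketch using the same binary as this run (the --file flag is the one named in the box; -p selects the failing profile):
out/minikube-windows-amd64.exe logs -p force-systemd-flag-637800 --file=logs.txt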
I1227 20:53:59.831107 8368 out.go:203]
W1227 20:53:59.835110 8368 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
CONFIG_NAMESPACES: enabled
CONFIG_NET_NS: enabled
CONFIG_PID_NS: enabled
CONFIG_IPC_NS: enabled
CONFIG_UTS_NS: enabled
CONFIG_CPUSETS: enabled
CONFIG_MEMCG: enabled
CONFIG_INET: enabled
CONFIG_EXT4_FS: enabled
CONFIG_PROC_FS: enabled
CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
CONFIG_FAIR_GROUP_SCHED: enabled
CONFIG_CGROUPS: enabled
CONFIG_CGROUP_CPUACCT: enabled
CONFIG_CGROUP_DEVICE: enabled
CONFIG_CGROUP_FREEZER: enabled
CONFIG_CGROUP_PIDS: enabled
CONFIG_CGROUP_SCHED: enabled
CONFIG_OVERLAY_FS: enabled
CONFIG_AUFS_FS: not set - Required for aufs.
CONFIG_BLK_DEV_DM: enabled
CONFIG_CFS_BANDWIDTH: enabled
CONFIG_SECCOMP: enabled
CONFIG_SECCOMP_FILTER: enabled
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001080211s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1227 20:53:59.835110 8368 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W1227 20:53:59.835110 8368 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I1227 20:53:59.838109 8368 out.go:203]
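The exit advice above names two leads: the --extra-config=kubelet.cgroup-driver=systemd suggestion, and the kubeadm warning that kubelet v1.35 refuses cgroup v1 hosts unless the 'FailCgroupV1' kubelet configuration option is set to 'false'. A sketch of retrying with the suggested flag, assuming nothing beyond what this log prints (on this WSL2 cgroup v1 host, the kubelet validation errors later in the log indicate the durable fix is moving the host to cgroup v2):
out/minikube-windows-amd64.exe start -p force-systemd-flag-637800 --memory=3072 \
  --force-systemd --driver=docker --extra-config=kubelet.cgroup-driver=systemd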
I1227 20:53:57.431785 13384 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (2.7108963s)
I1227 20:53:57.436790 13384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1227 20:53:57.465806 13384 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1227 20:53:57.483772 13384 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1227 20:53:57.487780 13384 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1227 20:53:57.502786 13384 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1227 20:53:57.502786 13384 kubeadm.go:158] found existing configuration files:
I1227 20:53:57.508785 13384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1227 20:53:57.524777 13384 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1227 20:53:57.529788 13384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1227 20:53:57.552791 13384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1227 20:53:57.566779 13384 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1227 20:53:57.570784 13384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1227 20:53:57.596900 13384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1227 20:53:57.611895 13384 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1227 20:53:57.614903 13384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1227 20:53:57.637059 13384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1227 20:53:57.651066 13384 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1227 20:53:57.655065 13384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1227 20:53:57.672063 13384 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1227 20:53:57.753671 13384 kubeadm.go:319] [WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
I1227 20:53:57.763547 13384 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
I1227 20:53:57.882353 13384 kubeadm.go:319] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1227 20:53:56.760720 13396 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
I1227 20:53:56.760720 13396 start.go:159] libmachine.API.Create for "cert-expiration-978000" (driver="docker")
I1227 20:53:56.760720 13396 client.go:173] LocalClient.Create starting
I1227 20:53:56.760720 13396 main.go:144] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
I1227 20:53:56.760720 13396 main.go:144] libmachine: Decoding PEM data...
I1227 20:53:56.760720 13396 main.go:144] libmachine: Parsing certificate...
I1227 20:53:56.761726 13396 main.go:144] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
I1227 20:53:56.761726 13396 main.go:144] libmachine: Decoding PEM data...
I1227 20:53:56.761726 13396 main.go:144] libmachine: Parsing certificate...
I1227 20:53:56.768104 13396 cli_runner.go:164] Run: docker network inspect cert-expiration-978000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1227 20:53:56.822175 13396 cli_runner.go:211] docker network inspect cert-expiration-978000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1227 20:53:56.826158 13396 network_create.go:284] running [docker network inspect cert-expiration-978000] to gather additional debugging logs...
I1227 20:53:56.826158 13396 cli_runner.go:164] Run: docker network inspect cert-expiration-978000
W1227 20:53:56.876161 13396 cli_runner.go:211] docker network inspect cert-expiration-978000 returned with exit code 1
I1227 20:53:56.876161 13396 network_create.go:287] error running [docker network inspect cert-expiration-978000]: docker network inspect cert-expiration-978000: exit status 1
stdout:
[]
stderr:
Error response from daemon: network cert-expiration-978000 not found
I1227 20:53:56.876161 13396 network_create.go:289] output of [docker network inspect cert-expiration-978000]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network cert-expiration-978000 not found
** /stderr **
I1227 20:53:56.880170 13396 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1227 20:53:56.961330 13396 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I1227 20:53:56.991940 13396 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I1227 20:53:57.023945 13396 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I1227 20:53:57.055537 13396 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I1227 20:53:57.087241 13396 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I1227 20:53:57.102500 13396 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0017e9dd0}
I1227 20:53:57.102500 13396 network_create.go:124] attempt to create docker network cert-expiration-978000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
I1227 20:53:57.106790 13396 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-978000 cert-expiration-978000
W1227 20:53:57.164189 13396 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-978000 cert-expiration-978000 returned with exit code 1
W1227 20:53:57.165183 13396 network_create.go:149] failed to create docker network cert-expiration-978000 192.168.94.0/24 with gateway 192.168.94.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-978000 cert-expiration-978000: exit status 1
stdout:
stderr:
Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
W1227 20:53:57.165183 13396 network_create.go:116] failed to create docker network cert-expiration-978000 192.168.94.0/24, will retry: subnet is taken
I1227 20:53:57.196468 13396 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I1227 20:53:57.215696 13396 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001943bc0}
I1227 20:53:57.215696 13396 network_create.go:124] attempt to create docker network cert-expiration-978000 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
I1227 20:53:57.220707 13396 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-978000 cert-expiration-978000
I1227 20:53:57.385139 13396 network_create.go:108] docker network cert-expiration-978000 192.168.103.0/24 created
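The first subnet attempt above failed with "Pool overlaps with other one on this address space", so minikube walked on to the next free /24. A hedged sketch for listing which subnets the Docker daemon already holds (standard docker CLI; network names come from whatever exists on the host):
# print every Docker network with its subnet to spot the overlap
docker network inspect -f '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}' $(docker network ls -q)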
I1227 20:53:57.385139 13396 kic.go:121] calculated static IP "192.168.103.2" for the "cert-expiration-978000" container
I1227 20:53:57.395778 13396 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1227 20:53:57.462777 13396 cli_runner.go:164] Run: docker volume create cert-expiration-978000 --label name.minikube.sigs.k8s.io=cert-expiration-978000 --label created_by.minikube.sigs.k8s.io=true
I1227 20:53:57.533783 13396 oci.go:103] Successfully created a docker volume cert-expiration-978000
I1227 20:53:57.538777 13396 cli_runner.go:164] Run: docker run --rm --name cert-expiration-978000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-978000 --entrypoint /usr/bin/test -v cert-expiration-978000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
I1227 20:53:58.816595 13396 cli_runner.go:217] Completed: docker run --rm --name cert-expiration-978000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-978000 --entrypoint /usr/bin/test -v cert-expiration-978000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib: (1.2778028s)
I1227 20:53:58.816595 13396 oci.go:107] Successfully prepared a docker volume cert-expiration-978000
I1227 20:53:58.816595 13396 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I1227 20:53:58.816595 13396 kic.go:194] Starting extracting preloaded images to volume ...
I1227 20:53:58.820589 13396 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cert-expiration-978000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
I1227 20:53:57.154179 11092 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cert-options-955700\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1227 20:53:57.237685 11092 cli_runner.go:164] Run: docker container inspect cert-options-955700 --format={{.State.Status}}
I1227 20:53:57.290686 11092 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1227 20:53:57.290686 11092 kic_runner.go:114] Args: [docker exec --privileged cert-options-955700 chown docker:docker /home/docker/.ssh/authorized_keys]
I1227 20:53:57.409803 11092 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cert-options-955700\id_rsa...
I1227 20:53:59.731541 11092 cli_runner.go:164] Run: docker container inspect cert-options-955700 --format={{.State.Status}}
I1227 20:53:59.780538 11092 machine.go:94] provisionDockerMachine start ...
I1227 20:53:59.784612 11092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-955700
I1227 20:53:59.840102 11092 main.go:144] libmachine: Using SSH client type: native
I1227 20:53:59.854107 11092 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6d94ee200] 0x7ff6d94f0d60 <nil> [] 0s} 127.0.0.1 60668 <nil> <nil>}
I1227 20:53:59.854107 11092 main.go:144] libmachine: About to run SSH command:
hostname
I1227 20:54:00.023411 11092 main.go:144] libmachine: SSH cmd err, output: <nil>: cert-options-955700
I1227 20:54:00.023411 11092 ubuntu.go:182] provisioning hostname "cert-options-955700"
I1227 20:54:00.029391 11092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-955700
I1227 20:54:00.091411 11092 main.go:144] libmachine: Using SSH client type: native
I1227 20:54:00.091411 11092 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6d94ee200] 0x7ff6d94f0d60 <nil> [] 0s} 127.0.0.1 60668 <nil> <nil>}
I1227 20:54:00.091411 11092 main.go:144] libmachine: About to run SSH command:
sudo hostname cert-options-955700 && echo "cert-options-955700" | sudo tee /etc/hostname
I1227 20:54:00.279737 11092 main.go:144] libmachine: SSH cmd err, output: <nil>: cert-options-955700
I1227 20:54:00.284748 11092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-955700
I1227 20:54:00.344761 11092 main.go:144] libmachine: Using SSH client type: native
I1227 20:54:00.345740 11092 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6d94ee200] 0x7ff6d94f0d60 <nil> [] 0s} 127.0.0.1 60668 <nil> <nil>}
I1227 20:54:00.345740 11092 main.go:144] libmachine: About to run SSH command:
if ! grep -xq '.*\scert-options-955700' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-options-955700/g' /etc/hosts;
else
echo '127.0.1.1 cert-options-955700' | sudo tee -a /etc/hosts;
fi
fi
I1227 20:54:00.512687 11092 main.go:144] libmachine: SSH cmd err, output: <nil>:
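The /etc/hosts script above is idempotent: it rewrites an existing 127.0.1.1 entry to the new hostname, appends one if none exists, and does nothing when the hostname already resolves. A one-line verification sketch, assuming the cert-options-955700 profile from this run:
minikube -p cert-options-955700 ssh -- grep cert-options-955700 /etc/hosts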
I1227 20:54:00.512687 11092 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
I1227 20:54:00.512687 11092 ubuntu.go:190] setting up certificates
I1227 20:54:00.512687 11092 provision.go:84] configureAuth start
I1227 20:54:00.516698 11092 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-options-955700
I1227 20:54:00.569690 11092 provision.go:143] copyHostCerts
I1227 20:54:00.569690 11092 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
I1227 20:54:00.569690 11092 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
I1227 20:54:00.569690 11092 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
I1227 20:54:00.570694 11092 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
I1227 20:54:00.570694 11092 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
I1227 20:54:00.570694 11092 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
I1227 20:54:00.571693 11092 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
I1227 20:54:00.571693 11092 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
I1227 20:54:00.571693 11092 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
I1227 20:54:00.572693 11092 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.cert-options-955700 san=[127.0.0.1 192.168.76.2 cert-options-955700 localhost minikube]
I1227 20:54:00.624609 11092 provision.go:177] copyRemoteCerts
I1227 20:54:00.627601 11092 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1227 20:54:00.630604 11092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-955700
I1227 20:54:00.683603 11092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60668 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cert-options-955700\id_rsa Username:docker}
I1227 20:54:00.820632 11092 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
I1227 20:54:00.851623 11092 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1227 20:54:00.880280 11092 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1227 20:54:00.911864 11092 provision.go:87] duration metric: took 399.1374ms to configureAuth
I1227 20:54:00.911893 11092 ubuntu.go:206] setting minikube options for container-runtime
I1227 20:54:00.911893 11092 config.go:182] Loaded profile config "cert-options-955700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 20:54:00.915549 11092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-955700
I1227 20:54:00.974111 11092 main.go:144] libmachine: Using SSH client type: native
I1227 20:54:00.974111 11092 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6d94ee200] 0x7ff6d94f0d60 <nil> [] 0s} 127.0.0.1 60668 <nil> <nil>}
I1227 20:54:00.974111 11092 main.go:144] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1227 20:54:01.142159 11092 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
I1227 20:54:01.142159 11092 ubuntu.go:71] root file system type: overlay
I1227 20:54:01.142159 11092 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I1227 20:54:01.146157 11092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-955700
I1227 20:54:01.195187 11092 main.go:144] libmachine: Using SSH client type: native
I1227 20:54:01.196159 11092 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6d94ee200] 0x7ff6d94f0d60 <nil> [] 0s} 127.0.0.1 60668 <nil> <nil>}
I1227 20:54:01.196159 11092 main.go:144] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
Wants=network-online.target containerd.service
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=always
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
-H fd:// --containerd=/run/containerd/containerd.sock \
-H unix:///var/run/docker.sock \
--default-ulimit=nofile=1048576:1048576 \
--tlsverify \
--tlscacert /etc/docker/ca.pem \
--tlscert /etc/docker/server.pem \
--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1227 20:54:01.591866 11092 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
Wants=network-online.target containerd.service
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=always
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H fd:// --containerd=/run/containerd/containerd.sock -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
I1227 20:54:01.598481 11092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-955700
I1227 20:54:01.666085 11092 main.go:144] libmachine: Using SSH client type: native
I1227 20:54:01.666693 11092 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6d94ee200] 0x7ff6d94f0d60 <nil> [] 0s} 127.0.0.1 60668 <nil> <nil>}
I1227 20:54:01.666693 11092 main.go:144] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
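The one-liner above is an update-if-changed guard: diff exits non-zero only when the freshly rendered unit differs from the installed one, and only then is the new file moved into place and docker re-enabled and restarted. The same logic from this run, unrolled as a sketch:
if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
  sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
fi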
==> Docker <==
Dec 27 20:45:51 force-systemd-flag-637800 dockerd[1192]: time="2025-12-27T20:45:51.698481581Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
Dec 27 20:45:51 force-systemd-flag-637800 dockerd[1192]: time="2025-12-27T20:45:51.698573888Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
Dec 27 20:45:51 force-systemd-flag-637800 dockerd[1192]: time="2025-12-27T20:45:51.698585389Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
Dec 27 20:45:51 force-systemd-flag-637800 dockerd[1192]: time="2025-12-27T20:45:51.698590790Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
Dec 27 20:45:51 force-systemd-flag-637800 dockerd[1192]: time="2025-12-27T20:45:51.698595990Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
Dec 27 20:45:51 force-systemd-flag-637800 dockerd[1192]: time="2025-12-27T20:45:51.698618692Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
Dec 27 20:45:51 force-systemd-flag-637800 dockerd[1192]: time="2025-12-27T20:45:51.698658595Z" level=info msg="Initializing buildkit"
Dec 27 20:45:51 force-systemd-flag-637800 dockerd[1192]: time="2025-12-27T20:45:51.810183005Z" level=info msg="Completed buildkit initialization"
Dec 27 20:45:51 force-systemd-flag-637800 dockerd[1192]: time="2025-12-27T20:45:51.815900682Z" level=info msg="Daemon has completed initialization"
Dec 27 20:45:51 force-systemd-flag-637800 dockerd[1192]: time="2025-12-27T20:45:51.816136302Z" level=info msg="API listen on /run/docker.sock"
Dec 27 20:45:51 force-systemd-flag-637800 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 27 20:45:51 force-systemd-flag-637800 dockerd[1192]: time="2025-12-27T20:45:51.816189906Z" level=info msg="API listen on [::]:2376"
Dec 27 20:45:51 force-systemd-flag-637800 dockerd[1192]: time="2025-12-27T20:45:51.816191306Z" level=info msg="API listen on /var/run/docker.sock"
Dec 27 20:45:52 force-systemd-flag-637800 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
Dec 27 20:45:52 force-systemd-flag-637800 cri-dockerd[1485]: time="2025-12-27T20:45:52Z" level=info msg="Starting cri-dockerd dev (HEAD)"
Dec 27 20:45:52 force-systemd-flag-637800 cri-dockerd[1485]: time="2025-12-27T20:45:52Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
Dec 27 20:45:52 force-systemd-flag-637800 cri-dockerd[1485]: time="2025-12-27T20:45:52Z" level=info msg="Start docker client with request timeout 0s"
Dec 27 20:45:52 force-systemd-flag-637800 cri-dockerd[1485]: time="2025-12-27T20:45:52Z" level=info msg="Hairpin mode is set to hairpin-veth"
Dec 27 20:45:52 force-systemd-flag-637800 cri-dockerd[1485]: time="2025-12-27T20:45:52Z" level=info msg="Loaded network plugin cni"
Dec 27 20:45:52 force-systemd-flag-637800 cri-dockerd[1485]: time="2025-12-27T20:45:52Z" level=info msg="Docker cri networking managed by network plugin cni"
Dec 27 20:45:52 force-systemd-flag-637800 cri-dockerd[1485]: time="2025-12-27T20:45:52Z" level=info msg="Setting cgroupDriver systemd"
Dec 27 20:45:52 force-systemd-flag-637800 cri-dockerd[1485]: time="2025-12-27T20:45:52Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
Dec 27 20:45:52 force-systemd-flag-637800 cri-dockerd[1485]: time="2025-12-27T20:45:52Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
Dec 27 20:45:52 force-systemd-flag-637800 cri-dockerd[1485]: time="2025-12-27T20:45:52Z" level=info msg="Start cri-dockerd grpc backend"
Dec 27 20:45:52 force-systemd-flag-637800 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
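Note the mismatch recorded in the journal above: dockerd warns that cgroup v1 support is deprecated, while cri-dockerd sets cgroupDriver to systemd. A hedged one-line check of what the daemon actually reports (both fields are standard docker info output):
docker info --format '{{.CgroupDriver}} / cgroup v{{.CgroupVersion}}'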
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
==> describe nodes <==
command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1227 20:54:03.480384 10538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 20:54:03.481367 10538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 20:54:03.483796 10538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 20:54:03.485469 10538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 20:54:03.486293 10538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
==> dmesg <==
[ +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
[ +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
[ +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
[ +0.000000] FS: 0000000000000000 GS: 0000000000000000
[ +0.876144] CPU: 8 PID: 268388 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
[ +0.000003] RIP: 0033:0x7fbb60b22b20
[ +0.000007] Code: Unable to access opcode bytes at RIP 0x7fbb60b22af6.
[ +0.000001] RSP: 002b:00007fff28debba0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
[ +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
[ +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
[ +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
[ +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
[ +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
[ +0.000001] FS: 0000000000000000 GS: 0000000000000000
[ +10.748039] CPU: 13 PID: 270153 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
[ +0.000004] RIP: 0033:0x7fbd88e39b20
[ +0.000008] Code: Unable to access opcode bytes at RIP 0x7fbd88e39af6.
[ +0.000001] RSP: 002b:00007ffe9e2a8310 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
[ +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
[ +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
[ +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
[ +0.000002] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
[ +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
[ +0.000001] FS: 0000000000000000 GS: 0000000000000000
[ +3.087400] tmpfs: Unknown parameter 'noswap'
==> kernel <==
20:54:03 up 1:09, 0 user, load average: 4.33, 3.98, 3.17
Linux force-systemd-flag-637800 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
==> kubelet <==
Dec 27 20:54:00 force-systemd-flag-637800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 27 20:54:00 force-systemd-flag-637800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
Dec 27 20:54:00 force-systemd-flag-637800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 27 20:54:00 force-systemd-flag-637800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 27 20:54:00 force-systemd-flag-637800 kubelet[10354]: E1227 20:54:00.842207 10354 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 27 20:54:00 force-systemd-flag-637800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 27 20:54:00 force-systemd-flag-637800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 27 20:54:01 force-systemd-flag-637800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
Dec 27 20:54:01 force-systemd-flag-637800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 27 20:54:01 force-systemd-flag-637800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 27 20:54:01 force-systemd-flag-637800 kubelet[10408]: E1227 20:54:01.600282 10408 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 27 20:54:01 force-systemd-flag-637800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 27 20:54:01 force-systemd-flag-637800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 27 20:54:02 force-systemd-flag-637800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
Dec 27 20:54:02 force-systemd-flag-637800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 27 20:54:02 force-systemd-flag-637800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 27 20:54:02 force-systemd-flag-637800 kubelet[10424]: E1227 20:54:02.340808 10424 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 27 20:54:02 force-systemd-flag-637800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 27 20:54:02 force-systemd-flag-637800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 27 20:54:02 force-systemd-flag-637800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 325.
Dec 27 20:54:02 force-systemd-flag-637800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 27 20:54:03 force-systemd-flag-637800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 27 20:54:03 force-systemd-flag-637800 kubelet[10513]: E1227 20:54:03.080428 10513 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 27 20:54:03 force-systemd-flag-637800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 27 20:54:03 force-systemd-flag-637800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
-- /stdout --
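The failure chain in the dump above is consistent: kubelet exits on every start because this build refuses to run on a cgroup v1 host, so the apiserver on localhost:8443 never comes up and each kubectl probe is refused. When triaging a run like this, the cgroup version the Docker host actually exposes can be checked directly; the commands below are a hedged diagnostic sketch, not something the test harness runs (the container name is the profile name taken from this log):

    # Ask Docker which cgroup version it is using ("1" matches the kubelet error above)
    docker info --format '{{.CgroupVersion}}'

    # Or check the filesystem type mounted inside the node container:
    # "cgroup2fs" means cgroup v2, "tmpfs" means the legacy v1 hierarchy
    docker exec force-systemd-flag-637800 stat -fc %T /sys/fs/cgroup/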
helpers_test.go:263: (dbg) Run: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p force-systemd-flag-637800 -n force-systemd-flag-637800
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p force-systemd-flag-637800 -n force-systemd-flag-637800: exit status 6 (580.676ms)
-- stdout --
Stopped
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1227 20:54:04.375176 3668 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-637800" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-flag-637800" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-flag-637800" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-windows-amd64.exe delete -p force-systemd-flag-637800
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-637800: (4.4541792s)
--- FAIL: TestForceSystemdFlag (563.03s)
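The root cause here looks environmental rather than a flake: the node kernel (5.15.153.1-microsoft-standard-WSL2, per the dmesg and kernel sections above) is running with cgroup v1, which the kubelet shipped in this run rejects outright. A commonly suggested workaround on WSL2 hosts, untested in this run and offered only as a sketch, is to disable cgroup v1 on the WSL2 kernel command line so the VM boots with a unified cgroup v2 hierarchy:

    # %UserProfile%\.wslconfig -- boot the WSL2 kernel without cgroup v1 hierarchies
    [wsl2]
    kernelCommandLine = cgroup_no_v1=all

After saving the file, a full `wsl --shutdown` followed by a Docker Desktop restart is needed for the setting to take effect; the node container should then report cgroup v2, which should no longer trip the kubelet configuration validation that failed above.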