=== RUN TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT TestForceSystemdFlag
docker_test.go:91: (dbg) Run: out/minikube-linux-arm64 start -p force-systemd-flag-574701 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=docker
E1227 09:57:53.494715 550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/functional-918607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:59:10.017107 550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/addons-071879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:00:33.043587 550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/skaffold-964044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:00:33.049275 550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/skaffold-964044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:00:33.059550 550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/skaffold-964044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:00:33.079845 550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/skaffold-964044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:00:33.120341 550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/skaffold-964044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:00:33.200727 550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/skaffold-964044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:00:33.361213 550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/skaffold-964044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:00:33.681938 550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/skaffold-964044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:00:34.322243 550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/skaffold-964044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:00:35.602722 550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/skaffold-964044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:00:38.163550 550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/skaffold-964044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:00:43.284752 550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/skaffold-964044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:00:53.525768 550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/skaffold-964044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:01:06.962280 550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/addons-071879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:01:14.006496 550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/skaffold-964044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:01:54.966739 550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/skaffold-964044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:02:53.494777 550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/functional-918607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:03:16.887002 550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/skaffold-964044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:05:33.039157 550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/skaffold-964044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-flag-574701 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=docker: exit status 109 (8m23.979438864s)
-- stdout --
* [force-systemd-flag-574701] minikube v1.37.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=22343
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/22343-548332/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-548332/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "force-systemd-flag-574701" primary control-plane node in "force-systemd-flag-574701" cluster
* Pulling base image v0.0.48-1766570851-22316 ...
-- /stdout --
** stderr **
I1227 09:57:23.854045 769388 out.go:360] Setting OutFile to fd 1 ...
I1227 09:57:23.854214 769388 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:57:23.854225 769388 out.go:374] Setting ErrFile to fd 2...
I1227 09:57:23.854241 769388 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:57:23.854500 769388 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-548332/.minikube/bin
I1227 09:57:23.854935 769388 out.go:368] Setting JSON to false
I1227 09:57:23.855775 769388 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":16795,"bootTime":1766812649,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
I1227 09:57:23.855839 769388 start.go:143] virtualization:
I1227 09:57:23.860623 769388 out.go:179] * [force-systemd-flag-574701] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I1227 09:57:23.864301 769388 out.go:179] - MINIKUBE_LOCATION=22343
I1227 09:57:23.864369 769388 notify.go:221] Checking for updates...
I1227 09:57:23.871858 769388 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1227 09:57:23.879831 769388 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22343-548332/kubeconfig
I1227 09:57:23.884111 769388 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-548332/.minikube
I1227 09:57:23.887027 769388 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I1227 09:57:23.890016 769388 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1227 09:57:23.893523 769388 config.go:182] Loaded profile config "force-systemd-env-159617": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 09:57:23.893679 769388 driver.go:422] Setting default libvirt URI to qemu:///system
I1227 09:57:23.942486 769388 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
I1227 09:57:23.942607 769388 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1227 09:57:24.033935 769388 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:true NGoroutines:58 SystemTime:2025-12-27 09:57:24.020858019 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1227 09:57:24.034041 769388 docker.go:319] overlay module found
I1227 09:57:24.037348 769388 out.go:179] * Using the docker driver based on user configuration
I1227 09:57:24.040109 769388 start.go:309] selected driver: docker
I1227 09:57:24.040131 769388 start.go:928] validating driver "docker" against <nil>
I1227 09:57:24.040145 769388 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1227 09:57:24.040848 769388 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1227 09:57:24.119453 769388 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:48 SystemTime:2025-12-27 09:57:24.103606726 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1227 09:57:24.119606 769388 start_flags.go:333] no existing cluster config was found, will generate one from the flags
I1227 09:57:24.119820 769388 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
I1227 09:57:24.124043 769388 out.go:179] * Using Docker driver with root privileges
I1227 09:57:24.126916 769388 cni.go:84] Creating CNI manager for ""
I1227 09:57:24.126993 769388 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1227 09:57:24.127014 769388 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I1227 09:57:24.127097 769388 start.go:353] cluster config:
{Name:force-systemd-flag-574701 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-574701 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1227 09:57:24.130340 769388 out.go:179] * Starting "force-systemd-flag-574701" primary control-plane node in "force-systemd-flag-574701" cluster
I1227 09:57:24.133152 769388 cache.go:134] Beginning downloading kic base image for docker with docker
I1227 09:57:24.136080 769388 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
I1227 09:57:24.140060 769388 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
I1227 09:57:24.140141 769388 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I1227 09:57:24.140165 769388 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-548332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4
I1227 09:57:24.140177 769388 cache.go:65] Caching tarball of preloaded images
I1227 09:57:24.140256 769388 preload.go:251] Found /home/jenkins/minikube-integration/22343-548332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I1227 09:57:24.140271 769388 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
I1227 09:57:24.140383 769388 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/config.json ...
I1227 09:57:24.140406 769388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/config.json: {Name:mk4143ebcade308fb419077e3f8332f378dc7937 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 09:57:24.161069 769388 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
I1227 09:57:24.161091 769388 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
I1227 09:57:24.161109 769388 cache.go:243] Successfully downloaded all kic artifacts
I1227 09:57:24.161140 769388 start.go:360] acquireMachinesLock for force-systemd-flag-574701: {Name:mkf48a67b67df727c9d74e45482507e00be21327 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1227 09:57:24.161254 769388 start.go:364] duration metric: took 93.536µs to acquireMachinesLock for "force-systemd-flag-574701"
I1227 09:57:24.161290 769388 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-574701 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-574701 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Static
IP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I1227 09:57:24.161353 769388 start.go:125] createHost starting for "" (driver="docker")
I1227 09:57:24.165884 769388 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
I1227 09:57:24.166208 769388 start.go:159] libmachine.API.Create for "force-systemd-flag-574701" (driver="docker")
I1227 09:57:24.166249 769388 client.go:173] LocalClient.Create starting
I1227 09:57:24.166322 769388 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem
I1227 09:57:24.166357 769388 main.go:144] libmachine: Decoding PEM data...
I1227 09:57:24.166372 769388 main.go:144] libmachine: Parsing certificate...
I1227 09:57:24.166421 769388 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-548332/.minikube/certs/cert.pem
I1227 09:57:24.166486 769388 main.go:144] libmachine: Decoding PEM data...
I1227 09:57:24.166501 769388 main.go:144] libmachine: Parsing certificate...
I1227 09:57:24.166999 769388 cli_runner.go:164] Run: docker network inspect force-systemd-flag-574701 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1227 09:57:24.184851 769388 cli_runner.go:211] docker network inspect force-systemd-flag-574701 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1227 09:57:24.184931 769388 network_create.go:284] running [docker network inspect force-systemd-flag-574701] to gather additional debugging logs...
I1227 09:57:24.184947 769388 cli_runner.go:164] Run: docker network inspect force-systemd-flag-574701
W1227 09:57:24.201338 769388 cli_runner.go:211] docker network inspect force-systemd-flag-574701 returned with exit code 1
I1227 09:57:24.201367 769388 network_create.go:287] error running [docker network inspect force-systemd-flag-574701]: docker network inspect force-systemd-flag-574701: exit status 1
stdout:
[]
stderr:
Error response from daemon: network force-systemd-flag-574701 not found
I1227 09:57:24.201381 769388 network_create.go:289] output of [docker network inspect force-systemd-flag-574701]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network force-systemd-flag-574701 not found
** /stderr **
I1227 09:57:24.201475 769388 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1227 09:57:24.231038 769388 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e355fd7f0d9b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:52:60:87:03:40:b8} reservation:<nil>}
I1227 09:57:24.231335 769388 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-28c67d556586 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:b6:3f:02:85:ee:bb} reservation:<nil>}
I1227 09:57:24.231654 769388 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fae86aeafbd6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0e:6c:82:80:bb:6d} reservation:<nil>}
I1227 09:57:24.232203 769388 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a2d880}
I1227 09:57:24.232227 769388 network_create.go:124] attempt to create docker network force-systemd-flag-574701 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
I1227 09:57:24.232294 769388 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-574701 force-systemd-flag-574701
I1227 09:57:24.312633 769388 network_create.go:108] docker network force-systemd-flag-574701 192.168.76.0/24 created
I1227 09:57:24.312662 769388 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-flag-574701" container
I1227 09:57:24.312733 769388 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1227 09:57:24.330428 769388 cli_runner.go:164] Run: docker volume create force-systemd-flag-574701 --label name.minikube.sigs.k8s.io=force-systemd-flag-574701 --label created_by.minikube.sigs.k8s.io=true
I1227 09:57:24.354470 769388 oci.go:103] Successfully created a docker volume force-systemd-flag-574701
I1227 09:57:24.354571 769388 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-574701-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-574701 --entrypoint /usr/bin/test -v force-systemd-flag-574701:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
I1227 09:57:25.150777 769388 oci.go:107] Successfully prepared a docker volume force-systemd-flag-574701
I1227 09:57:25.150847 769388 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I1227 09:57:25.150858 769388 kic.go:194] Starting extracting preloaded images to volume ...
I1227 09:57:25.150937 769388 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-548332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-574701:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
I1227 09:57:29.285806 769388 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-548332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-574701:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (4.134820012s)
I1227 09:57:29.285838 769388 kic.go:203] duration metric: took 4.134977669s to extract preloaded images to volume ...
W1227 09:57:29.285987 769388 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1227 09:57:29.286133 769388 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1227 09:57:29.373204 769388 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-574701 --name force-systemd-flag-574701 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-574701 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-574701 --network force-systemd-flag-574701 --ip 192.168.76.2 --volume force-systemd-flag-574701:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
I1227 09:57:29.767688 769388 cli_runner.go:164] Run: docker container inspect force-systemd-flag-574701 --format={{.State.Running}}
I1227 09:57:29.794873 769388 cli_runner.go:164] Run: docker container inspect force-systemd-flag-574701 --format={{.State.Status}}
I1227 09:57:29.823050 769388 cli_runner.go:164] Run: docker exec force-systemd-flag-574701 stat /var/lib/dpkg/alternatives/iptables
I1227 09:57:29.890557 769388 oci.go:144] the created container "force-systemd-flag-574701" has a running status.
I1227 09:57:29.890594 769388 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-flag-574701/id_rsa...
I1227 09:57:30.464624 769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-flag-574701/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I1227 09:57:30.464726 769388 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-flag-574701/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1227 09:57:30.506648 769388 cli_runner.go:164] Run: docker container inspect force-systemd-flag-574701 --format={{.State.Status}}
I1227 09:57:30.563495 769388 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1227 09:57:30.563516 769388 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-574701 chown docker:docker /home/docker/.ssh/authorized_keys]
I1227 09:57:30.675307 769388 cli_runner.go:164] Run: docker container inspect force-systemd-flag-574701 --format={{.State.Status}}
I1227 09:57:30.705027 769388 machine.go:94] provisionDockerMachine start ...
I1227 09:57:30.705109 769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
I1227 09:57:30.748542 769388 main.go:144] libmachine: Using SSH client type: native
I1227 09:57:30.748883 769388 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33723 <nil> <nil>}
I1227 09:57:30.748899 769388 main.go:144] libmachine: About to run SSH command:
hostname
I1227 09:57:30.749537 769388 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I1227 09:57:33.902589 769388 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-574701
I1227 09:57:33.902611 769388 ubuntu.go:182] provisioning hostname "force-systemd-flag-574701"
I1227 09:57:33.902682 769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
I1227 09:57:33.920165 769388 main.go:144] libmachine: Using SSH client type: native
I1227 09:57:33.920469 769388 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33723 <nil> <nil>}
I1227 09:57:33.920480 769388 main.go:144] libmachine: About to run SSH command:
sudo hostname force-systemd-flag-574701 && echo "force-systemd-flag-574701" | sudo tee /etc/hostname
I1227 09:57:34.085277 769388 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-574701
I1227 09:57:34.085356 769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
I1227 09:57:34.102383 769388 main.go:144] libmachine: Using SSH client type: native
I1227 09:57:34.102698 769388 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33723 <nil> <nil>}
I1227 09:57:34.102716 769388 main.go:144] libmachine: About to run SSH command:
if ! grep -xq '.*\sforce-systemd-flag-574701' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-574701/g' /etc/hosts;
else
echo '127.0.1.1 force-systemd-flag-574701' | sudo tee -a /etc/hosts;
fi
fi
I1227 09:57:34.255031 769388 main.go:144] libmachine: SSH cmd err, output: <nil>:
I1227 09:57:34.255059 769388 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-548332/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-548332/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-548332/.minikube}
I1227 09:57:34.255083 769388 ubuntu.go:190] setting up certificates
I1227 09:57:34.255093 769388 provision.go:84] configureAuth start
I1227 09:57:34.255175 769388 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-574701
I1227 09:57:34.271814 769388 provision.go:143] copyHostCerts
I1227 09:57:34.271855 769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22343-548332/.minikube/ca.pem
I1227 09:57:34.271887 769388 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-548332/.minikube/ca.pem, removing ...
I1227 09:57:34.271900 769388 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-548332/.minikube/ca.pem
I1227 09:57:34.271973 769388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-548332/.minikube/ca.pem (1078 bytes)
I1227 09:57:34.272067 769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22343-548332/.minikube/cert.pem
I1227 09:57:34.272089 769388 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-548332/.minikube/cert.pem, removing ...
I1227 09:57:34.272097 769388 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-548332/.minikube/cert.pem
I1227 09:57:34.272126 769388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-548332/.minikube/cert.pem (1123 bytes)
I1227 09:57:34.272178 769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22343-548332/.minikube/key.pem
I1227 09:57:34.272198 769388 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-548332/.minikube/key.pem, removing ...
I1227 09:57:34.272205 769388 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-548332/.minikube/key.pem
I1227 09:57:34.272232 769388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-548332/.minikube/key.pem (1675 bytes)
I1227 09:57:34.272293 769388 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-548332/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-574701 san=[127.0.0.1 192.168.76.2 force-systemd-flag-574701 localhost minikube]
I1227 09:57:34.545510 769388 provision.go:177] copyRemoteCerts
I1227 09:57:34.545576 769388 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1227 09:57:34.545630 769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
I1227 09:57:34.562287 769388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33723 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-flag-574701/id_rsa Username:docker}
I1227 09:57:34.663483 769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I1227 09:57:34.663552 769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1227 09:57:34.681829 769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/machines/server.pem -> /etc/docker/server.pem
I1227 09:57:34.681902 769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
I1227 09:57:34.701079 769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I1227 09:57:34.701139 769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1227 09:57:34.722250 769388 provision.go:87] duration metric: took 467.13373ms to configureAuth
I1227 09:57:34.722280 769388 ubuntu.go:206] setting minikube options for container-runtime
I1227 09:57:34.722503 769388 config.go:182] Loaded profile config "force-systemd-flag-574701": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 09:57:34.722587 769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
I1227 09:57:34.748482 769388 main.go:144] libmachine: Using SSH client type: native
I1227 09:57:34.748825 769388 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33723 <nil> <nil>}
I1227 09:57:34.748842 769388 main.go:144] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1227 09:57:34.911917 769388 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
I1227 09:57:34.911937 769388 ubuntu.go:71] root file system type: overlay
I1227 09:57:34.912090 769388 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I1227 09:57:34.912153 769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
I1227 09:57:34.931590 769388 main.go:144] libmachine: Using SSH client type: native
I1227 09:57:34.931909 769388 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33723 <nil> <nil>}
I1227 09:57:34.931998 769388 main.go:144] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
Wants=network-online.target containerd.service
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=always
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
-H fd:// --containerd=/run/containerd/containerd.sock \
-H unix:///var/run/docker.sock \
--default-ulimit=nofile=1048576:1048576 \
--tlsverify \
--tlscacert /etc/docker/ca.pem \
--tlscert /etc/docker/server.pem \
--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1227 09:57:35.094955 769388 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
Wants=network-online.target containerd.service
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=always
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H fd:// --containerd=/run/containerd/containerd.sock -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
I1227 09:57:35.095071 769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
I1227 09:57:35.115477 769388 main.go:144] libmachine: Using SSH client type: native
I1227 09:57:35.115820 769388 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33723 <nil> <nil>}
I1227 09:57:35.115843 769388 main.go:144] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1227 09:57:36.313708 769388 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2025-12-12 14:49:02.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2025-12-27 09:57:35.088526773 +0000
@@ -9,23 +9,34 @@
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
Restart=always
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H fd:// --containerd=/run/containerd/containerd.sock -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
+
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
+LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I1227 09:57:36.313732 769388 machine.go:97] duration metric: took 5.608683566s to provisionDockerMachine
I1227 09:57:36.313745 769388 client.go:176] duration metric: took 12.147489846s to LocalClient.Create
I1227 09:57:36.313757 769388 start.go:167] duration metric: took 12.14755212s to libmachine.API.Create "force-systemd-flag-574701"
I1227 09:57:36.313768 769388 start.go:293] postStartSetup for "force-systemd-flag-574701" (driver="docker")
I1227 09:57:36.313777 769388 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1227 09:57:36.313843 769388 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1227 09:57:36.313894 769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
I1227 09:57:36.333968 769388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33723 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-flag-574701/id_rsa Username:docker}
I1227 09:57:36.436051 769388 ssh_runner.go:195] Run: cat /etc/os-release
I1227 09:57:36.439811 769388 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1227 09:57:36.439837 769388 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1227 09:57:36.439848 769388 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-548332/.minikube/addons for local assets ...
I1227 09:57:36.439901 769388 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-548332/.minikube/files for local assets ...
I1227 09:57:36.439994 769388 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem -> 5501972.pem in /etc/ssl/certs
I1227 09:57:36.440010 769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem -> /etc/ssl/certs/5501972.pem
I1227 09:57:36.440117 769388 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1227 09:57:36.449353 769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem --> /etc/ssl/certs/5501972.pem (1708 bytes)
I1227 09:57:36.472877 769388 start.go:296] duration metric: took 159.095049ms for postStartSetup
I1227 09:57:36.473245 769388 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-574701
I1227 09:57:36.490073 769388 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/config.json ...
I1227 09:57:36.490364 769388 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1227 09:57:36.490419 769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
I1227 09:57:36.508708 769388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33723 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-flag-574701/id_rsa Username:docker}
I1227 09:57:36.616568 769388 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1227 09:57:36.622218 769388 start.go:128] duration metric: took 12.460850316s to createHost
I1227 09:57:36.622246 769388 start.go:83] releasing machines lock for "force-systemd-flag-574701", held for 12.460980323s
I1227 09:57:36.622323 769388 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-574701
I1227 09:57:36.641788 769388 ssh_runner.go:195] Run: cat /version.json
I1227 09:57:36.641849 769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
I1227 09:57:36.642098 769388 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1227 09:57:36.642163 769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
I1227 09:57:36.664287 769388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33723 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-flag-574701/id_rsa Username:docker}
I1227 09:57:36.672747 769388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33723 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-flag-574701/id_rsa Username:docker}
I1227 09:57:36.780184 769388 ssh_runner.go:195] Run: systemctl --version
I1227 09:57:36.880930 769388 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1227 09:57:36.887011 769388 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1227 09:57:36.887080 769388 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1227 09:57:36.924112 769388 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
I1227 09:57:36.924139 769388 start.go:496] detecting cgroup driver to use...
I1227 09:57:36.924152 769388 start.go:500] using "systemd" cgroup driver as enforced via flags
I1227 09:57:36.924252 769388 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1227 09:57:36.946873 769388 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1227 09:57:36.956487 769388 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1227 09:57:36.966480 769388 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
I1227 09:57:36.966545 769388 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
I1227 09:57:36.977403 769388 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1227 09:57:36.987483 769388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1227 09:57:36.998514 769388 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1227 09:57:37.010694 769388 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1227 09:57:37.022875 769388 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1227 09:57:37.036011 769388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1227 09:57:37.044803 769388 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1227 09:57:37.054260 769388 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1227 09:57:37.063604 769388 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1227 09:57:37.071796 769388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 09:57:37.216587 769388 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1227 09:57:37.323467 769388 start.go:496] detecting cgroup driver to use...
I1227 09:57:37.323492 769388 start.go:500] using "systemd" cgroup driver as enforced via flags
I1227 09:57:37.323546 769388 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I1227 09:57:37.352336 769388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1227 09:57:37.365635 769388 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1227 09:57:37.402353 769388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1227 09:57:37.420004 769388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1227 09:57:37.441069 769388 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I1227 09:57:37.461000 769388 ssh_runner.go:195] Run: which cri-dockerd
I1227 09:57:37.468781 769388 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I1227 09:57:37.477924 769388 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
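The 192-byte drop-in copied above is not dumped in the log; a sketch for inspecting what cri-docker.service actually picks up once the unit is reloaded:

  systemctl cat cri-docker.service
  # lists the base unit plus the drop-in
  # /etc/systemd/system/cri-docker.service.d/10-cni.conf written above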
I1227 09:57:37.502109 769388 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I1227 09:57:37.672967 769388 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I1227 09:57:37.840323 769388 docker.go:578] configuring docker to use "systemd" as cgroup driver...
I1227 09:57:37.840416 769388 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
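The log records only that 129 bytes were copied to /etc/docker/daemon.json. A sketch of the part that --force-systemd enforces; the exact payload is an assumption here, since it is not recorded in the log:

  sudo cat /etc/docker/daemon.json
  # plausibly contains, among other keys:
  #   {"exec-opts": ["native.cgroupdriver=systemd"], ...}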
I1227 09:57:37.872525 769388 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
I1227 09:57:37.886221 769388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 09:57:38.039548 769388 ssh_runner.go:195] Run: sudo systemctl restart docker
I1227 09:57:38.563380 769388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1227 09:57:38.577307 769388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I1227 09:57:38.592258 769388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I1227 09:57:38.608999 769388 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I1227 09:57:38.783640 769388 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I1227 09:57:38.955435 769388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 09:57:39.116493 769388 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I1227 09:57:39.131867 769388 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
I1227 09:57:39.146438 769388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 09:57:39.292670 769388 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I1227 09:57:39.371970 769388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I1227 09:57:39.392203 769388 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
I1227 09:57:39.392325 769388 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I1227 09:57:39.396824 769388 start.go:574] Will wait 60s for crictl version
I1227 09:57:39.396962 769388 ssh_runner.go:195] Run: which crictl
I1227 09:57:39.400890 769388 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1227 09:57:39.425825 769388 start.go:590] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 29.1.3
RuntimeApiVersion: v1
I1227 09:57:39.425938 769388 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1227 09:57:39.452940 769388 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1227 09:57:39.487385 769388 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 29.1.3 ...
I1227 09:57:39.487511 769388 cli_runner.go:164] Run: docker network inspect force-systemd-flag-574701 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1227 09:57:39.509398 769388 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I1227 09:57:39.513521 769388 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1227 09:57:39.525777 769388 kubeadm.go:884] updating cluster {Name:force-systemd-flag-574701 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-574701 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
I1227 09:57:39.525889 769388 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I1227 09:57:39.525945 769388 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1227 09:57:39.550774 769388 docker.go:694] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
registry.k8s.io/pause:3.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I1227 09:57:39.550799 769388 docker.go:624] Images already preloaded, skipping extraction
I1227 09:57:39.550866 769388 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1227 09:57:39.574219 769388 docker.go:694] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
registry.k8s.io/pause:3.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I1227 09:57:39.574242 769388 cache_images.go:86] Images are preloaded, skipping loading
I1227 09:57:39.574252 769388 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 docker true true} ...
I1227 09:57:39.574354 769388 kubeadm.go:947] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-574701 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
[Install]
config:
{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-574701 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
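The [Unit]/[Service] override above is installed a few lines below as a systemd drop-in (10-kubeadm.conf) next to the base kubelet.service. A sketch to see the merged unit on the node:

  systemctl cat kubelet
  # shows /lib/systemd/system/kubelet.service plus
  # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf with the
  # ExecStart override shown above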
I1227 09:57:39.574415 769388 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I1227 09:57:39.642105 769388 cni.go:84] Creating CNI manager for ""
I1227 09:57:39.642130 769388 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1227 09:57:39.642146 769388 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1227 09:57:39.642167 769388 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-574701 NodeName:force-systemd-flag-574701 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1227 09:57:39.642292 769388 kubeadm.go:203] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.76.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "force-systemd-flag-574701"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.76.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.35.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
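Sketch: the generated kubeadm.yaml above could be exercised without mutating the node via kubeadm's standard dry-run mode (minikube itself does not run this step; it is shown only as a way to validate the config):

  sudo env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" \
    kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run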
I1227 09:57:39.642363 769388 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
I1227 09:57:39.651846 769388 binaries.go:51] Found k8s binaries, skipping transfer
I1227 09:57:39.651910 769388 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1227 09:57:39.661240 769388 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
I1227 09:57:39.677750 769388 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1227 09:57:39.692714 769388 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
I1227 09:57:39.705586 769388 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I1227 09:57:39.709624 769388 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1227 09:57:39.719304 769388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 09:57:39.872388 769388 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1227 09:57:39.905933 769388 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701 for IP: 192.168.76.2
I1227 09:57:39.905958 769388 certs.go:195] generating shared ca certs ...
I1227 09:57:39.905975 769388 certs.go:227] acquiring lock for ca certs: {Name:mka57d8b1d581d5829589e9bbd771e6117908cec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 09:57:39.906194 769388 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-548332/.minikube/ca.key
I1227 09:57:39.906270 769388 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-548332/.minikube/proxy-client-ca.key
I1227 09:57:39.906284 769388 certs.go:257] generating profile certs ...
I1227 09:57:39.906359 769388 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/client.key
I1227 09:57:39.906376 769388 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/client.crt with IP's: []
I1227 09:57:40.185176 769388 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/client.crt ...
I1227 09:57:40.185209 769388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/client.crt: {Name:mkd8df8f694ab6bd0be298ca10765d50a0840ba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 09:57:40.185510 769388 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/client.key ...
I1227 09:57:40.185530 769388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/client.key: {Name:mkedfb2c92eeb1c8634de35cfef29ff1eb8c71f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 09:57:40.185683 769388 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.key.b6598b4a
I1227 09:57:40.185706 769388 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.crt.b6598b4a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
I1227 09:57:40.780814 769388 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.crt.b6598b4a ...
I1227 09:57:40.780832 769388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.crt.b6598b4a: {Name:mk220ae28824c87aa5d8ba64a794d883980a39f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 09:57:40.780959 769388 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.key.b6598b4a ...
I1227 09:57:40.780966 769388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.key.b6598b4a: {Name:mkac97d48f25e58d566aafd93cbcf157b2cb0117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 09:57:40.781034 769388 certs.go:382] copying /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.crt.b6598b4a -> /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.crt
I1227 09:57:40.781140 769388 certs.go:386] copying /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.key.b6598b4a -> /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.key
I1227 09:57:40.781206 769388 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.key
I1227 09:57:40.781219 769388 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.crt with IP's: []
I1227 09:57:40.864310 769388 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.crt ...
I1227 09:57:40.864342 769388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.crt: {Name:mk5dc7c59c3dfc68c7c8e2186f25c0bda8c48900 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 09:57:40.864549 769388 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.key ...
I1227 09:57:40.864569 769388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.key: {Name:mk7098be4d9c15bf1f3c8453e90bcc9388cdc9d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 09:57:40.864678 769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I1227 09:57:40.864715 769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I1227 09:57:40.864736 769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I1227 09:57:40.864755 769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I1227 09:57:40.864768 769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I1227 09:57:40.864796 769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I1227 09:57:40.864821 769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I1227 09:57:40.864837 769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I1227 09:57:40.864913 769388 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/550197.pem (1338 bytes)
W1227 09:57:40.864990 769388 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-548332/.minikube/certs/550197_empty.pem, impossibly tiny 0 bytes
I1227 09:57:40.865007 769388 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca-key.pem (1679 bytes)
I1227 09:57:40.865038 769388 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem (1078 bytes)
I1227 09:57:40.865102 769388 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/cert.pem (1123 bytes)
I1227 09:57:40.865134 769388 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/key.pem (1675 bytes)
I1227 09:57:40.865199 769388 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem (1708 bytes)
I1227 09:57:40.865244 769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I1227 09:57:40.865267 769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/550197.pem -> /usr/share/ca-certificates/550197.pem
I1227 09:57:40.865282 769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem -> /usr/share/ca-certificates/5501972.pem
I1227 09:57:40.865799 769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1227 09:57:40.898569 769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1227 09:57:40.927873 769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1227 09:57:40.948313 769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1227 09:57:40.969255 769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
I1227 09:57:40.989875 769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1227 09:57:41.010787 769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1227 09:57:41.031724 769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1227 09:57:41.051433 769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1227 09:57:41.077779 769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/certs/550197.pem --> /usr/share/ca-certificates/550197.pem (1338 bytes)
I1227 09:57:41.108786 769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem --> /usr/share/ca-certificates/5501972.pem (1708 bytes)
I1227 09:57:41.133210 769388 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
I1227 09:57:41.147828 769388 ssh_runner.go:195] Run: openssl version
I1227 09:57:41.154460 769388 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5501972.pem
I1227 09:57:41.161904 769388 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5501972.pem /etc/ssl/certs/5501972.pem
I1227 09:57:41.169300 769388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5501972.pem
I1227 09:57:41.173499 769388 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:25 /usr/share/ca-certificates/5501972.pem
I1227 09:57:41.173602 769388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5501972.pem
I1227 09:57:41.219730 769388 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I1227 09:57:41.227914 769388 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/5501972.pem /etc/ssl/certs/3ec20f2e.0
I1227 09:57:41.234863 769388 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1227 09:57:41.242037 769388 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1227 09:57:41.252122 769388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1227 09:57:41.256231 769388 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:19 /usr/share/ca-certificates/minikubeCA.pem
I1227 09:57:41.256330 769388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1227 09:57:41.303396 769388 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1227 09:57:41.311657 769388 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
I1227 09:57:41.319645 769388 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/550197.pem
I1227 09:57:41.327015 769388 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/550197.pem /etc/ssl/certs/550197.pem
I1227 09:57:41.334332 769388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/550197.pem
I1227 09:57:41.338256 769388 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:25 /usr/share/ca-certificates/550197.pem
I1227 09:57:41.338360 769388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/550197.pem
I1227 09:57:41.382878 769388 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I1227 09:57:41.390786 769388 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/550197.pem /etc/ssl/certs/51391683.0
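The /etc/ssl/certs/<hash>.0 symlink names above come from OpenSSL's subject-hash scheme; sketch, using the pairing already visible in this log:

  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
  # prints b5213941, which is why that cert is linked as /etc/ssl/certs/b5213941.0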
I1227 09:57:41.399024 769388 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1227 09:57:41.403779 769388 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1227 09:57:41.403832 769388 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-574701 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-574701 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1227 09:57:41.403946 769388 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1227 09:57:41.429145 769388 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1227 09:57:41.439644 769388 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1227 09:57:41.448769 769388 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1227 09:57:41.448834 769388 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1227 09:57:41.460465 769388 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1227 09:57:41.460481 769388 kubeadm.go:158] found existing configuration files:
I1227 09:57:41.460550 769388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1227 09:57:41.471042 769388 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1227 09:57:41.471103 769388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1227 09:57:41.480178 769388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1227 09:57:41.490398 769388 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1227 09:57:41.490464 769388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1227 09:57:41.499105 769388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1227 09:57:41.510257 769388 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1227 09:57:41.510321 769388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1227 09:57:41.520923 769388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1227 09:57:41.534256 769388 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1227 09:57:41.534333 769388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1227 09:57:41.542461 769388 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1227 09:57:41.646824 769388 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
I1227 09:57:41.648335 769388 kubeadm.go:319] [preflight] Running pre-flight checks
I1227 09:57:41.753889 769388 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1227 09:57:41.754015 769388 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1227 09:57:41.754079 769388 kubeadm.go:319] OS: Linux
I1227 09:57:41.754162 769388 kubeadm.go:319] CGROUPS_CPU: enabled
I1227 09:57:41.754242 769388 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1227 09:57:41.754318 769388 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1227 09:57:41.754400 769388 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1227 09:57:41.754479 769388 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1227 09:57:41.754553 769388 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1227 09:57:41.754656 769388 kubeadm.go:319] CGROUPS_PIDS: enabled
I1227 09:57:41.754726 769388 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1227 09:57:41.754805 769388 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1227 09:57:41.836243 769388 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1227 09:57:41.836443 769388 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1227 09:57:41.836586 769388 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1227 09:57:41.855494 769388 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1227 09:57:41.860963 769388 out.go:252] - Generating certificates and keys ...
I1227 09:57:41.861090 769388 kubeadm.go:319] [certs] Using existing ca certificate authority
I1227 09:57:41.861187 769388 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1227 09:57:42.027134 769388 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1227 09:57:42.183308 769388 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1227 09:57:42.275495 769388 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1227 09:57:42.538151 769388 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1227 09:57:42.689457 769388 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1227 09:57:42.690078 769388 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-574701 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
I1227 09:57:42.729913 769388 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1227 09:57:42.730516 769388 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-574701 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
I1227 09:57:42.981667 769388 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1227 09:57:43.099131 769388 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1227 09:57:43.810479 769388 kubeadm.go:319] [certs] Generating "sa" key and public key
I1227 09:57:43.811011 769388 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1227 09:57:44.109743 769388 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1227 09:57:44.315485 769388 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1227 09:57:44.540089 769388 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1227 09:57:44.694926 769388 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1227 09:57:45.077270 769388 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1227 09:57:45.080386 769388 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1227 09:57:45.089864 769388 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1227 09:57:45.093574 769388 out.go:252] - Booting up control plane ...
I1227 09:57:45.095563 769388 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1227 09:57:45.097773 769388 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1227 09:57:45.099785 769388 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1227 09:57:45.145757 769388 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1227 09:57:45.145889 769388 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1227 09:57:45.157698 769388 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1227 09:57:45.158555 769388 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1227 09:57:45.158619 769388 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1227 09:57:45.405440 769388 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1227 09:57:45.405562 769388 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1227 10:01:45.399682 769388 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001476405s
I1227 10:01:45.399725 769388 kubeadm.go:319]
I1227 10:01:45.399789 769388 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1227 10:01:45.399827 769388 kubeadm.go:319] - The kubelet is not running
I1227 10:01:45.399942 769388 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1227 10:01:45.399950 769388 kubeadm.go:319]
I1227 10:01:45.400064 769388 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1227 10:01:45.400098 769388 kubeadm.go:319] - 'systemctl status kubelet'
I1227 10:01:45.400133 769388 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1227 10:01:45.400138 769388 kubeadm.go:319]
I1227 10:01:45.404789 769388 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1227 10:01:45.405218 769388 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1227 10:01:45.405332 769388 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1227 10:01:45.405567 769388 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1227 10:01:45.405577 769388 kubeadm.go:319]
I1227 10:01:45.405646 769388 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
W1227 10:01:45.405800 769388 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-574701 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-574701 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001476405s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
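Per the cgroups-v1 warning in the stderr above, kubelet v1.35+ on a cgroup-v1 host (which this node is, per the preflight CGROUPS_* output) will not become healthy unless it is explicitly opted back in. A sketch of the opt-in that the warning describes; the field would live in the KubeletConfiguration document of kubeadm.yaml, and its name here is taken from the warning text rather than from this log:

  # apiVersion: kubelet.config.k8s.io/v1beta1
  # kind: KubeletConfiguration
  # failCgroupV1: false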
I1227 10:01:45.405885 769388 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
I1227 10:01:45.831088 769388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1227 10:01:45.845534 769388 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1227 10:01:45.845599 769388 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1227 10:01:45.853400 769388 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1227 10:01:45.853418 769388 kubeadm.go:158] found existing configuration files:
I1227 10:01:45.853490 769388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1227 10:01:45.862159 769388 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1227 10:01:45.862225 769388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1227 10:01:45.869960 769388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1227 10:01:45.877918 769388 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1227 10:01:45.877988 769388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1227 10:01:45.885657 769388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1227 10:01:45.893024 769388 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1227 10:01:45.893088 769388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1227 10:01:45.900643 769388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1227 10:01:45.908132 769388 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1227 10:01:45.908198 769388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1227 10:01:45.915813 769388 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1227 10:01:45.955846 769388 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
I1227 10:01:45.955910 769388 kubeadm.go:319] [preflight] Running pre-flight checks
I1227 10:01:46.044287 769388 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1227 10:01:46.044366  769388 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1227 10:01:46.044408  769388 kubeadm.go:319] OS: Linux
I1227 10:01:46.044460  769388 kubeadm.go:319] CGROUPS_CPU: enabled
I1227 10:01:46.044514  769388 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1227 10:01:46.044563  769388 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1227 10:01:46.044621  769388 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1227 10:01:46.044672  769388 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1227 10:01:46.044726  769388 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1227 10:01:46.044780  769388 kubeadm.go:319] CGROUPS_PIDS: enabled
I1227 10:01:46.044831  769388 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1227 10:01:46.044883  769388 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1227 10:01:46.122322 769388 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1227 10:01:46.122522 769388 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1227 10:01:46.122662 769388 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1227 10:01:46.135379 769388 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1227 10:01:46.139129 769388 out.go:252] - Generating certificates and keys ...
I1227 10:01:46.139327 769388 kubeadm.go:319] [certs] Using existing ca certificate authority
I1227 10:01:46.139450 769388 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1227 10:01:46.139598 769388 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1227 10:01:46.139674 769388 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
I1227 10:01:46.139756 769388 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
I1227 10:01:46.139815 769388 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
I1227 10:01:46.139883 769388 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
I1227 10:01:46.139949 769388 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
I1227 10:01:46.140059 769388 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1227 10:01:46.140138 769388 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1227 10:01:46.140469 769388 kubeadm.go:319] [certs] Using the existing "sa" key
I1227 10:01:46.140529 769388 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1227 10:01:46.278774 769388 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1227 10:01:46.467106 769388 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1227 10:01:46.674089 769388 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1227 10:01:46.962090 769388 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1227 10:01:47.089511 769388 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1227 10:01:47.090121 769388 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1227 10:01:47.094363 769388 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1227 10:01:47.097843 769388 out.go:252] - Booting up control plane ...
I1227 10:01:47.097949 769388 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1227 10:01:47.099592 769388 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1227 10:01:47.099673 769388 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1227 10:01:47.133940 769388 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1227 10:01:47.134045 769388 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1227 10:01:47.147908 769388 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1227 10:01:47.148976 769388 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1227 10:01:47.149327 769388 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1227 10:01:47.321604 769388 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1227 10:01:47.321718 769388 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1227 10:05:47.321648 769388 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000305874s
I1227 10:05:47.321690 769388 kubeadm.go:319]
I1227 10:05:47.321762 769388 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1227 10:05:47.321802 769388 kubeadm.go:319] - The kubelet is not running
I1227 10:05:47.321944 769388 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1227 10:05:47.321958 769388 kubeadm.go:319]
I1227 10:05:47.322066 769388 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1227 10:05:47.322103 769388 kubeadm.go:319] - 'systemctl status kubelet'
I1227 10:05:47.322153 769388 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1227 10:05:47.322165 769388 kubeadm.go:319]
I1227 10:05:47.325886 769388 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1227 10:05:47.326310 769388 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1227 10:05:47.326424 769388 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1227 10:05:47.326663 769388 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1227 10:05:47.326673 769388 kubeadm.go:319]
I1227 10:05:47.326742 769388 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
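The health probe kubeadm is polling above can be reproduced by hand from the host. A minimal sketch, assuming the node container name force-systemd-flag-574701 from this run and that curl is available inside the node image:

    # Query the kubelet health endpoint inside the minikube node container
    docker exec force-systemd-flag-574701 curl -sSL http://127.0.0.1:10248/healthz
    # Inspect the kubelet unit and its recent journal, per the kubeadm advice above
    docker exec force-systemd-flag-574701 systemctl status kubelet
    docker exec force-systemd-flag-574701 journalctl -xeu kubelet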
I1227 10:05:47.326828 769388 kubeadm.go:403] duration metric: took 8m5.922999378s to StartCluster
I1227 10:05:47.326868 769388 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
I1227 10:05:47.326939 769388 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
I1227 10:05:47.362142 769388 cri.go:96] found id: ""
I1227 10:05:47.362184 769388 logs.go:282] 0 containers: []
W1227 10:05:47.362193 769388 logs.go:284] No container was found matching "kube-apiserver"
I1227 10:05:47.362200 769388 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
I1227 10:05:47.362260 769388 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
I1227 10:05:47.386992 769388 cri.go:96] found id: ""
I1227 10:05:47.387017 769388 logs.go:282] 0 containers: []
W1227 10:05:47.387026 769388 logs.go:284] No container was found matching "etcd"
I1227 10:05:47.387033 769388 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
I1227 10:05:47.387095 769388 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
I1227 10:05:47.412506 769388 cri.go:96] found id: ""
I1227 10:05:47.412532 769388 logs.go:282] 0 containers: []
W1227 10:05:47.412541 769388 logs.go:284] No container was found matching "coredns"
I1227 10:05:47.412549 769388 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
I1227 10:05:47.412607 769388 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
I1227 10:05:47.440415 769388 cri.go:96] found id: ""
I1227 10:05:47.440440 769388 logs.go:282] 0 containers: []
W1227 10:05:47.440449 769388 logs.go:284] No container was found matching "kube-scheduler"
I1227 10:05:47.440456 769388 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
I1227 10:05:47.440515 769388 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
I1227 10:05:47.465494 769388 cri.go:96] found id: ""
I1227 10:05:47.465522 769388 logs.go:282] 0 containers: []
W1227 10:05:47.465530 769388 logs.go:284] No container was found matching "kube-proxy"
I1227 10:05:47.465538 769388 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
I1227 10:05:47.465601 769388 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
I1227 10:05:47.494595 769388 cri.go:96] found id: ""
I1227 10:05:47.494628 769388 logs.go:282] 0 containers: []
W1227 10:05:47.494638 769388 logs.go:284] No container was found matching "kube-controller-manager"
I1227 10:05:47.494645 769388 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
I1227 10:05:47.494716 769388 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
I1227 10:05:47.523703 769388 cri.go:96] found id: ""
I1227 10:05:47.523728 769388 logs.go:282] 0 containers: []
W1227 10:05:47.523736 769388 logs.go:284] No container was found matching "kindnet"
I1227 10:05:47.523746 769388 logs.go:123] Gathering logs for Docker ...
I1227 10:05:47.523757 769388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I1227 10:05:47.546298 769388 logs.go:123] Gathering logs for container status ...
I1227 10:05:47.546329 769388 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1227 10:05:47.584884 769388 logs.go:123] Gathering logs for kubelet ...
I1227 10:05:47.584959 769388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1227 10:05:47.653574 769388 logs.go:123] Gathering logs for dmesg ...
I1227 10:05:47.653612 769388 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1227 10:05:47.671978 769388 logs.go:123] Gathering logs for describe nodes ...
I1227 10:05:47.672006 769388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1227 10:05:47.737784 769388 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1227 10:05:47.729462 5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 10:05:47.730146 5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 10:05:47.731816 5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 10:05:47.732344 5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 10:05:47.733957 5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1227 10:05:47.737860 769388 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000305874s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1227 10:05:47.737902 769388 out.go:285] *
W1227 10:05:47.737955 769388 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000305874s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1227 10:05:47.737974 769388 out.go:285] *
W1227 10:05:47.738225 769388 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
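The log-collection step the box above asks for maps to a single command. A sketch using this run's binary and profile name:

    # Collect the full minikube logs for attachment to a GitHub issue
    out/minikube-linux-arm64 -p force-systemd-flag-574701 logs --file=logs.txt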
I1227 10:05:47.743845 769388 out.go:203]
W1227 10:05:47.746703 769388 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000305874s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1227 10:05:47.746744 769388 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W1227 10:05:47.746767 769388 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I1227 10:05:47.749808 769388 out.go:203]
** /stderr **
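The suggestion emitted near the end of the stderr above can be folded straight into the failing invocation. A hedged sketch reusing this test's own arguments plus the extra-config flag the minikube output itself recommends:

    # Retry the same start, forcing the kubelet onto the systemd cgroup driver
    out/minikube-linux-arm64 start -p force-systemd-flag-574701 --memory=3072 --force-systemd \
        --alsologtostderr -v=5 --driver=docker --container-runtime=docker \
        --extra-config=kubelet.cgroup-driver=systemd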
docker_test.go:93: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-flag-574701 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=docker" : exit status 109
docker_test.go:110: (dbg) Run: out/minikube-linux-arm64 -p force-systemd-flag-574701 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2025-12-27 10:05:48.201467771 +0000 UTC m=+2830.367898401
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======> post-mortem[TestForceSystemdFlag]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======> post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:240: (dbg) Run: docker inspect force-systemd-flag-574701
helpers_test.go:244: (dbg) docker inspect force-systemd-flag-574701:
-- stdout --
[
{
"Id": "acba4de42c5d17cf7f8ba296b01cee507f6a5ef923e4a48391c4d92ba7508361",
"Created": "2025-12-27T09:57:29.403390828Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 769964,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-12-27T09:57:29.482246999Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
"ResolvConfPath": "/var/lib/docker/containers/acba4de42c5d17cf7f8ba296b01cee507f6a5ef923e4a48391c4d92ba7508361/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/acba4de42c5d17cf7f8ba296b01cee507f6a5ef923e4a48391c4d92ba7508361/hostname",
"HostsPath": "/var/lib/docker/containers/acba4de42c5d17cf7f8ba296b01cee507f6a5ef923e4a48391c4d92ba7508361/hosts",
"LogPath": "/var/lib/docker/containers/acba4de42c5d17cf7f8ba296b01cee507f6a5ef923e4a48391c4d92ba7508361/acba4de42c5d17cf7f8ba296b01cee507f6a5ef923e4a48391c4d92ba7508361-json.log",
"Name": "/force-systemd-flag-574701",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"force-systemd-flag-574701:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "force-systemd-flag-574701",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 3221225472,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 6442450944,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "acba4de42c5d17cf7f8ba296b01cee507f6a5ef923e4a48391c4d92ba7508361",
"LowerDir": "/var/lib/docker/overlay2/7d0acbf588c6554f2130bb6fb2bd30e909f03b1b39b51e989956a9e2920bc2ec-init/diff:/var/lib/docker/overlay2/9b533b4deb9c1d535741c7522fe23eacc0fb251795d87993eb74f4ff9ff9e74e/diff",
"MergedDir": "/var/lib/docker/overlay2/7d0acbf588c6554f2130bb6fb2bd30e909f03b1b39b51e989956a9e2920bc2ec/merged",
"UpperDir": "/var/lib/docker/overlay2/7d0acbf588c6554f2130bb6fb2bd30e909f03b1b39b51e989956a9e2920bc2ec/diff",
"WorkDir": "/var/lib/docker/overlay2/7d0acbf588c6554f2130bb6fb2bd30e909f03b1b39b51e989956a9e2920bc2ec/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "force-systemd-flag-574701",
"Source": "/var/lib/docker/volumes/force-systemd-flag-574701/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "force-systemd-flag-574701",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "force-systemd-flag-574701",
"name.minikube.sigs.k8s.io": "force-systemd-flag-574701",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "8d87e9a2d21908ab80189916324051f8bc9d66c1dfafe0c47016cc5e1cb3446a",
"SandboxKey": "/var/run/docker/netns/8d87e9a2d219",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33723"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33724"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33727"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33725"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33726"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"force-systemd-flag-574701": {
"IPAMConfig": {
"IPv4Address": "192.168.76.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "7a:03:d3:06:6a:81",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "d5bc7f6c9a07c2381c87e9dc3b31039111859cad13b96f3981123438fcc35f62",
"EndpointID": "5b85b8045fe8a53821c4c468f5cf0eaeb629c07482c241bb95efc2716476875a",
"Gateway": "192.168.76.1",
"IPAddress": "192.168.76.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"force-systemd-flag-574701",
"acba4de42c5d"
]
}
}
}
}
]
-- /stdout --
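The fields of the inspect blob that matter for a cgroup-driver failure can be pulled directly with docker's Go-template --format flag rather than reading the full JSON. A minimal sketch against the same container:

    # Cgroup namespace mode and privileged status of the node container
    docker inspect -f '{{.HostConfig.CgroupnsMode}} {{.HostConfig.Privileged}}' force-systemd-flag-574701
    # Container state, matching the State block above
    docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' force-systemd-flag-574701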
helpers_test.go:248: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-574701 -n force-systemd-flag-574701
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-574701 -n force-systemd-flag-574701: exit status 6 (311.524486ms)
-- stdout --
Running
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1227 10:05:48.516963 781664 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-574701" does not appear in /home/jenkins/minikube-integration/22343-548332/kubeconfig
** /stderr **
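The stale-kubeconfig warning above names its own fix. A sketch, assuming the same binary and profile:

    # Repoint the kubeconfig entry at the current cluster endpoint
    out/minikube-linux-arm64 -p force-systemd-flag-574701 update-context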
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestForceSystemdFlag FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======> post-mortem[TestForceSystemdFlag]: minikube logs <======
helpers_test.go:256: (dbg) Run: out/minikube-linux-arm64 -p force-systemd-flag-574701 logs -n 25
helpers_test.go:261: TestForceSystemdFlag logs:
-- stdout --
==> Audit <==
┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ ssh │ -p cilium-334346 sudo cat /etc/kubernetes/kubelet.conf │ cilium-334346 │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │ │
│ ssh │ -p cilium-334346 sudo cat /var/lib/kubelet/config.yaml │ cilium-334346 │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │ │
│ delete │ -p offline-docker-663445 │ offline-docker-663445 │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │ 27 Dec 25 09:57 UTC │
│ ssh │ -p cilium-334346 sudo systemctl status docker --all --full --no-pager │ cilium-334346 │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │ │
│ ssh │ -p cilium-334346 sudo systemctl cat docker --no-pager │ cilium-334346 │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │ │
│ ssh │ -p cilium-334346 sudo cat /etc/docker/daemon.json │ cilium-334346 │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │ │
│ ssh │ -p cilium-334346 sudo docker system info │ cilium-334346 │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │ │
│ ssh │ -p cilium-334346 sudo systemctl status cri-docker --all --full --no-pager │ cilium-334346 │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │ │
│ ssh │ -p cilium-334346 sudo systemctl cat cri-docker --no-pager │ cilium-334346 │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │ │
│ ssh │ -p cilium-334346 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ cilium-334346 │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │ │
│ ssh │ -p cilium-334346 sudo cat /usr/lib/systemd/system/cri-docker.service │ cilium-334346 │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │ │
│ ssh │ -p cilium-334346 sudo cri-dockerd --version │ cilium-334346 │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │ │
│ ssh │ -p cilium-334346 sudo systemctl status containerd --all --full --no-pager │ cilium-334346 │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │ │
│ ssh │ -p cilium-334346 sudo systemctl cat containerd --no-pager │ cilium-334346 │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │ │
│ ssh │ -p cilium-334346 sudo cat /lib/systemd/system/containerd.service │ cilium-334346 │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │ │
│ ssh │ -p cilium-334346 sudo cat /etc/containerd/config.toml │ cilium-334346 │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │ │
│ ssh │ -p cilium-334346 sudo containerd config dump │ cilium-334346 │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │ │
│ ssh │ -p cilium-334346 sudo systemctl status crio --all --full --no-pager │ cilium-334346 │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │ │
│ ssh │ -p cilium-334346 sudo systemctl cat crio --no-pager │ cilium-334346 │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │ │
│ ssh │ -p cilium-334346 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \; │ cilium-334346 │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │ │
│ ssh │ -p cilium-334346 sudo crio config │ cilium-334346 │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │ │
│ delete │ -p cilium-334346 │ cilium-334346 │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │ 27 Dec 25 09:57 UTC │
│ start │ -p force-systemd-env-159617 --memory=3072 --alsologtostderr -v=5 --driver=docker --container-runtime=docker │ force-systemd-env-159617 │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │ │
│ start │ -p force-systemd-flag-574701 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=docker │ force-systemd-flag-574701 │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │ │
│ ssh │ force-systemd-flag-574701 ssh docker info --format {{.CgroupDriver}} │ force-systemd-flag-574701 │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │ 27 Dec 25 10:05 UTC │
└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/27 09:57:23
Running on machine: ip-172-31-21-244
Binary: Built with gc go1.25.5 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1227 09:57:23.854045 769388 out.go:360] Setting OutFile to fd 1 ...
I1227 09:57:23.854214 769388 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:57:23.854225 769388 out.go:374] Setting ErrFile to fd 2...
I1227 09:57:23.854241 769388 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:57:23.854500 769388 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-548332/.minikube/bin
I1227 09:57:23.854935 769388 out.go:368] Setting JSON to false
I1227 09:57:23.855775 769388 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":16795,"bootTime":1766812649,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
I1227 09:57:23.855839 769388 start.go:143] virtualization:
I1227 09:57:23.860623 769388 out.go:179] * [force-systemd-flag-574701] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I1227 09:57:23.864301 769388 out.go:179] - MINIKUBE_LOCATION=22343
I1227 09:57:23.864369 769388 notify.go:221] Checking for updates...
I1227 09:57:23.871858 769388 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1227 09:57:23.879831 769388 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22343-548332/kubeconfig
I1227 09:57:23.884111 769388 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-548332/.minikube
I1227 09:57:23.887027 769388 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I1227 09:57:23.890016 769388 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1227 09:57:23.893523 769388 config.go:182] Loaded profile config "force-systemd-env-159617": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 09:57:23.893679 769388 driver.go:422] Setting default libvirt URI to qemu:///system
I1227 09:57:23.942486 769388 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
I1227 09:57:23.942607 769388 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1227 09:57:24.033935  769388 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:true NGoroutines:58 SystemTime:2025-12-27 09:57:24.020858019 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1227 09:57:24.034041 769388 docker.go:319] overlay module found
I1227 09:57:24.037348 769388 out.go:179] * Using the docker driver based on user configuration
I1227 09:57:24.040109 769388 start.go:309] selected driver: docker
I1227 09:57:24.040131 769388 start.go:928] validating driver "docker" against <nil>
I1227 09:57:24.040145 769388 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1227 09:57:24.040848 769388 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1227 09:57:24.119453 769388 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:48 SystemTime:2025-12-27 09:57:24.103606726 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1227 09:57:24.119606 769388 start_flags.go:333] no existing cluster config was found, will generate one from the flags
I1227 09:57:24.119820 769388 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
I1227 09:57:24.124043 769388 out.go:179] * Using Docker driver with root privileges
I1227 09:57:24.126916 769388 cni.go:84] Creating CNI manager for ""
I1227 09:57:24.126993 769388 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1227 09:57:24.127014 769388 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I1227 09:57:24.127097 769388 start.go:353] cluster config:
{Name:force-systemd-flag-574701 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-574701 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1227 09:57:24.130340 769388 out.go:179] * Starting "force-systemd-flag-574701" primary control-plane node in "force-systemd-flag-574701" cluster
I1227 09:57:24.133152 769388 cache.go:134] Beginning downloading kic base image for docker with docker
I1227 09:57:24.136080 769388 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
I1227 09:57:24.140060 769388 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
I1227 09:57:24.140141 769388 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I1227 09:57:24.140165 769388 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-548332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4
I1227 09:57:24.140177 769388 cache.go:65] Caching tarball of preloaded images
I1227 09:57:24.140256 769388 preload.go:251] Found /home/jenkins/minikube-integration/22343-548332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I1227 09:57:24.140271 769388 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
I1227 09:57:24.140383 769388 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/config.json ...
I1227 09:57:24.140406 769388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/config.json: {Name:mk4143ebcade308fb419077e3f8332f378dc7937 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 09:57:24.161069 769388 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
I1227 09:57:24.161091 769388 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
I1227 09:57:24.161109 769388 cache.go:243] Successfully downloaded all kic artifacts
I1227 09:57:24.161140 769388 start.go:360] acquireMachinesLock for force-systemd-flag-574701: {Name:mkf48a67b67df727c9d74e45482507e00be21327 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1227 09:57:24.161254 769388 start.go:364] duration metric: took 93.536µs to acquireMachinesLock for "force-systemd-flag-574701"
I1227 09:57:24.161290 769388 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-574701 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-574701 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I1227 09:57:24.161353 769388 start.go:125] createHost starting for "" (driver="docker")
I1227 09:57:23.421132 769090 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
I1227 09:57:23.421440 769090 start.go:159] libmachine.API.Create for "force-systemd-env-159617" (driver="docker")
I1227 09:57:23.421474 769090 client.go:173] LocalClient.Create starting
I1227 09:57:23.421564 769090 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem
I1227 09:57:23.421635 769090 main.go:144] libmachine: Decoding PEM data...
I1227 09:57:23.421681 769090 main.go:144] libmachine: Parsing certificate...
I1227 09:57:23.421760 769090 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-548332/.minikube/certs/cert.pem
I1227 09:57:23.421803 769090 main.go:144] libmachine: Decoding PEM data...
I1227 09:57:23.421839 769090 main.go:144] libmachine: Parsing certificate...
I1227 09:57:23.422293 769090 cli_runner.go:164] Run: docker network inspect force-systemd-env-159617 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1227 09:57:23.444615 769090 cli_runner.go:211] docker network inspect force-systemd-env-159617 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1227 09:57:23.444701 769090 network_create.go:284] running [docker network inspect force-systemd-env-159617] to gather additional debugging logs...
I1227 09:57:23.444722 769090 cli_runner.go:164] Run: docker network inspect force-systemd-env-159617
W1227 09:57:23.469730 769090 cli_runner.go:211] docker network inspect force-systemd-env-159617 returned with exit code 1
I1227 09:57:23.469759 769090 network_create.go:287] error running [docker network inspect force-systemd-env-159617]: docker network inspect force-systemd-env-159617: exit status 1
stdout:
[]
stderr:
Error response from daemon: network force-systemd-env-159617 not found
I1227 09:57:23.469771 769090 network_create.go:289] output of [docker network inspect force-systemd-env-159617]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network force-systemd-env-159617 not found
** /stderr **
I1227 09:57:23.469879 769090 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1227 09:57:23.484995 769090 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e355fd7f0d9b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:52:60:87:03:40:b8} reservation:<nil>}
I1227 09:57:23.485264 769090 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-28c67d556586 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:b6:3f:02:85:ee:bb} reservation:<nil>}
I1227 09:57:23.485535 769090 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fae86aeafbd6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0e:6c:82:80:bb:6d} reservation:<nil>}
I1227 09:57:23.485842 769090 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-74a76dba2194 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:06:01:b7:05:f7:b5} reservation:<nil>}
I1227 09:57:23.486201 769090 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a47360}
I1227 09:57:23.486220 769090 network_create.go:124] attempt to create docker network force-systemd-env-159617 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
I1227 09:57:23.486272 769090 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-159617 force-systemd-env-159617
I1227 09:57:23.588843 769090 network_create.go:108] docker network force-systemd-env-159617 192.168.85.0/24 created
I1227 09:57:23.588880 769090 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-159617" container
I1227 09:57:23.588951 769090 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1227 09:57:23.607164 769090 cli_runner.go:164] Run: docker volume create force-systemd-env-159617 --label name.minikube.sigs.k8s.io=force-systemd-env-159617 --label created_by.minikube.sigs.k8s.io=true
I1227 09:57:23.627044 769090 oci.go:103] Successfully created a docker volume force-systemd-env-159617
I1227 09:57:23.627271 769090 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-159617-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-159617 --entrypoint /usr/bin/test -v force-systemd-env-159617:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
I1227 09:57:24.208049 769090 oci.go:107] Successfully prepared a docker volume force-systemd-env-159617
I1227 09:57:24.208115 769090 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I1227 09:57:24.208125 769090 kic.go:194] Starting extracting preloaded images to volume ...
I1227 09:57:24.208197 769090 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-548332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-159617:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
I1227 09:57:24.165884 769388 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
I1227 09:57:24.166208 769388 start.go:159] libmachine.API.Create for "force-systemd-flag-574701" (driver="docker")
I1227 09:57:24.166249 769388 client.go:173] LocalClient.Create starting
I1227 09:57:24.166322 769388 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem
I1227 09:57:24.166357 769388 main.go:144] libmachine: Decoding PEM data...
I1227 09:57:24.166372 769388 main.go:144] libmachine: Parsing certificate...
I1227 09:57:24.166421 769388 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-548332/.minikube/certs/cert.pem
I1227 09:57:24.166486 769388 main.go:144] libmachine: Decoding PEM data...
I1227 09:57:24.166501 769388 main.go:144] libmachine: Parsing certificate...
I1227 09:57:24.166999 769388 cli_runner.go:164] Run: docker network inspect force-systemd-flag-574701 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1227 09:57:24.184851 769388 cli_runner.go:211] docker network inspect force-systemd-flag-574701 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1227 09:57:24.184931 769388 network_create.go:284] running [docker network inspect force-systemd-flag-574701] to gather additional debugging logs...
I1227 09:57:24.184947 769388 cli_runner.go:164] Run: docker network inspect force-systemd-flag-574701
W1227 09:57:24.201338 769388 cli_runner.go:211] docker network inspect force-systemd-flag-574701 returned with exit code 1
I1227 09:57:24.201367 769388 network_create.go:287] error running [docker network inspect force-systemd-flag-574701]: docker network inspect force-systemd-flag-574701: exit status 1
stdout:
[]
stderr:
Error response from daemon: network force-systemd-flag-574701 not found
I1227 09:57:24.201381 769388 network_create.go:289] output of [docker network inspect force-systemd-flag-574701]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network force-systemd-flag-574701 not found
** /stderr **
I1227 09:57:24.201475 769388 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1227 09:57:24.231038 769388 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e355fd7f0d9b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:52:60:87:03:40:b8} reservation:<nil>}
I1227 09:57:24.231335 769388 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-28c67d556586 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:b6:3f:02:85:ee:bb} reservation:<nil>}
I1227 09:57:24.231654 769388 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fae86aeafbd6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0e:6c:82:80:bb:6d} reservation:<nil>}
I1227 09:57:24.232203 769388 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a2d880}
I1227 09:57:24.232227 769388 network_create.go:124] attempt to create docker network force-systemd-flag-574701 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
I1227 09:57:24.232294 769388 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-574701 force-systemd-flag-574701
I1227 09:57:24.312633 769388 network_create.go:108] docker network force-systemd-flag-574701 192.168.76.0/24 created
I1227 09:57:24.312662 769388 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-flag-574701" container
I1227 09:57:24.312733 769388 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1227 09:57:24.330428 769388 cli_runner.go:164] Run: docker volume create force-systemd-flag-574701 --label name.minikube.sigs.k8s.io=force-systemd-flag-574701 --label created_by.minikube.sigs.k8s.io=true
I1227 09:57:24.354470 769388 oci.go:103] Successfully created a docker volume force-systemd-flag-574701
I1227 09:57:24.354571 769388 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-574701-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-574701 --entrypoint /usr/bin/test -v force-systemd-flag-574701:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
I1227 09:57:25.150777 769388 oci.go:107] Successfully prepared a docker volume force-systemd-flag-574701
I1227 09:57:25.150847 769388 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I1227 09:57:25.150858 769388 kic.go:194] Starting extracting preloaded images to volume ...
I1227 09:57:25.150937 769388 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-548332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-574701:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
I1227 09:57:29.290594 769090 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-548332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-159617:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (5.082338598s)
I1227 09:57:29.290643 769090 kic.go:203] duration metric: took 5.082509768s to extract preloaded images to volume ...
W1227 09:57:29.290794 769090 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1227 09:57:29.290951 769090 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1227 09:57:29.395948 769090 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-159617 --name force-systemd-env-159617 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-159617 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-159617 --network force-systemd-env-159617 --ip 192.168.85.2 --volume force-systemd-env-159617:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
I1227 09:57:29.916266 769090 cli_runner.go:164] Run: docker container inspect force-systemd-env-159617 --format={{.State.Running}}
I1227 09:57:29.946688 769090 cli_runner.go:164] Run: docker container inspect force-systemd-env-159617 --format={{.State.Status}}
I1227 09:57:29.995989 769090 cli_runner.go:164] Run: docker exec force-systemd-env-159617 stat /var/lib/dpkg/alternatives/iptables
I1227 09:57:30.096142 769090 oci.go:144] the created container "force-systemd-env-159617" has a running status.
I1227 09:57:30.096178 769090 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-env-159617/id_rsa...
I1227 09:57:30.500317 769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-env-159617/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I1227 09:57:30.500877 769090 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-env-159617/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1227 09:57:30.556340 769090 cli_runner.go:164] Run: docker container inspect force-systemd-env-159617 --format={{.State.Status}}
I1227 09:57:30.597973 769090 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1227 09:57:30.597993 769090 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-159617 chown docker:docker /home/docker/.ssh/authorized_keys]
I1227 09:57:30.707985 769090 cli_runner.go:164] Run: docker container inspect force-systemd-env-159617 --format={{.State.Status}}
I1227 09:57:30.755347 769090 machine.go:94] provisionDockerMachine start ...
I1227 09:57:30.755426 769090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159617
I1227 09:57:30.787678 769090 main.go:144] libmachine: Using SSH client type: native
I1227 09:57:30.788014 769090 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33728 <nil> <nil>}
I1227 09:57:30.788023 769090 main.go:144] libmachine: About to run SSH command:
hostname
I1227 09:57:30.789480 769090 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40286->127.0.0.1:33728: read: connection reset by peer
I1227 09:57:29.285806 769388 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-548332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-574701:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (4.134820012s)
I1227 09:57:29.285838 769388 kic.go:203] duration metric: took 4.134977669s to extract preloaded images to volume ...
W1227 09:57:29.285987 769388 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1227 09:57:29.286133 769388 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1227 09:57:29.373204 769388 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-574701 --name force-systemd-flag-574701 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-574701 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-574701 --network force-systemd-flag-574701 --ip 192.168.76.2 --volume force-systemd-flag-574701:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
I1227 09:57:29.767688 769388 cli_runner.go:164] Run: docker container inspect force-systemd-flag-574701 --format={{.State.Running}}
I1227 09:57:29.794873 769388 cli_runner.go:164] Run: docker container inspect force-systemd-flag-574701 --format={{.State.Status}}
I1227 09:57:29.823050 769388 cli_runner.go:164] Run: docker exec force-systemd-flag-574701 stat /var/lib/dpkg/alternatives/iptables
I1227 09:57:29.890557 769388 oci.go:144] the created container "force-systemd-flag-574701" has a running status.
I1227 09:57:29.890594 769388 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-flag-574701/id_rsa...
I1227 09:57:30.464624 769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-flag-574701/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I1227 09:57:30.464726 769388 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-flag-574701/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1227 09:57:30.506648 769388 cli_runner.go:164] Run: docker container inspect force-systemd-flag-574701 --format={{.State.Status}}
I1227 09:57:30.563495 769388 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1227 09:57:30.563516 769388 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-574701 chown docker:docker /home/docker/.ssh/authorized_keys]
I1227 09:57:30.675307 769388 cli_runner.go:164] Run: docker container inspect force-systemd-flag-574701 --format={{.State.Status}}
I1227 09:57:30.705027 769388 machine.go:94] provisionDockerMachine start ...
I1227 09:57:30.705109 769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
I1227 09:57:30.748542 769388 main.go:144] libmachine: Using SSH client type: native
I1227 09:57:30.748883 769388 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33723 <nil> <nil>}
I1227 09:57:30.748899 769388 main.go:144] libmachine: About to run SSH command:
hostname
I1227 09:57:30.749537 769388 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I1227 09:57:33.935423 769090 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-159617
I1227 09:57:33.935449 769090 ubuntu.go:182] provisioning hostname "force-systemd-env-159617"
I1227 09:57:33.935561 769090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159617
I1227 09:57:33.958892 769090 main.go:144] libmachine: Using SSH client type: native
I1227 09:57:33.959223 769090 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33728 <nil> <nil>}
I1227 09:57:33.959235 769090 main.go:144] libmachine: About to run SSH command:
sudo hostname force-systemd-env-159617 && echo "force-systemd-env-159617" | sudo tee /etc/hostname
I1227 09:57:34.119941 769090 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-159617
I1227 09:57:34.120013 769090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159617
I1227 09:57:34.142778 769090 main.go:144] libmachine: Using SSH client type: native
I1227 09:57:34.143089 769090 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33728 <nil> <nil>}
I1227 09:57:34.143106 769090 main.go:144] libmachine: About to run SSH command:
if ! grep -xq '.*\sforce-systemd-env-159617' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-159617/g' /etc/hosts;
else
echo '127.0.1.1 force-systemd-env-159617' | sudo tee -a /etc/hosts;
fi
fi
I1227 09:57:34.287061 769090 main.go:144] libmachine: SSH cmd err, output: <nil>:
I1227 09:57:34.287083 769090 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-548332/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-548332/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-548332/.minikube}
I1227 09:57:34.287101 769090 ubuntu.go:190] setting up certificates
I1227 09:57:34.287154 769090 provision.go:84] configureAuth start
I1227 09:57:34.287222 769090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-159617
I1227 09:57:34.331489 769090 provision.go:143] copyHostCerts
I1227 09:57:34.331534 769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22343-548332/.minikube/ca.pem
I1227 09:57:34.331572 769090 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-548332/.minikube/ca.pem, removing ...
I1227 09:57:34.331590 769090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-548332/.minikube/ca.pem
I1227 09:57:34.331648 769090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-548332/.minikube/ca.pem (1078 bytes)
I1227 09:57:34.331728 769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22343-548332/.minikube/cert.pem
I1227 09:57:34.331749 769090 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-548332/.minikube/cert.pem, removing ...
I1227 09:57:34.331757 769090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-548332/.minikube/cert.pem
I1227 09:57:34.331779 769090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-548332/.minikube/cert.pem (1123 bytes)
I1227 09:57:34.331821 769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22343-548332/.minikube/key.pem
I1227 09:57:34.331841 769090 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-548332/.minikube/key.pem, removing ...
I1227 09:57:34.331846 769090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-548332/.minikube/key.pem
I1227 09:57:34.331869 769090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-548332/.minikube/key.pem (1675 bytes)
I1227 09:57:34.331917 769090 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-548332/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-159617 san=[127.0.0.1 192.168.85.2 force-systemd-env-159617 localhost minikube]
I1227 09:57:34.598391 769090 provision.go:177] copyRemoteCerts
I1227 09:57:34.598509 769090 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1227 09:57:34.598589 769090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159617
I1227 09:57:34.616730 769090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33728 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-env-159617/id_rsa Username:docker}
I1227 09:57:34.716531 769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I1227 09:57:34.716639 769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1227 09:57:34.746980 769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/machines/server.pem -> /etc/docker/server.pem
I1227 09:57:34.747057 769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
I1227 09:57:34.766043 769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I1227 09:57:34.766100 769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1227 09:57:34.785469 769090 provision.go:87] duration metric: took 498.291074ms to configureAuth
I1227 09:57:34.785494 769090 ubuntu.go:206] setting minikube options for container-runtime
I1227 09:57:34.785662 769090 config.go:182] Loaded profile config "force-systemd-env-159617": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 09:57:34.785721 769090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159617
I1227 09:57:34.802871 769090 main.go:144] libmachine: Using SSH client type: native
I1227 09:57:34.803337 769090 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33728 <nil> <nil>}
I1227 09:57:34.803351 769090 main.go:144] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1227 09:57:34.967701 769090 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
I1227 09:57:34.967720 769090 ubuntu.go:71] root file system type: overlay
I1227 09:57:34.967841 769090 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I1227 09:57:34.967907 769090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159617
I1227 09:57:34.988654 769090 main.go:144] libmachine: Using SSH client type: native
I1227 09:57:34.988961 769090 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33728 <nil> <nil>}
I1227 09:57:34.989046 769090 main.go:144] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
Wants=network-online.target containerd.service
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=always
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
-H fd:// --containerd=/run/containerd/containerd.sock \
-H unix:///var/run/docker.sock \
--default-ulimit=nofile=1048576:1048576 \
--tlsverify \
--tlscacert /etc/docker/ca.pem \
--tlscert /etc/docker/server.pem \
--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1227 09:57:35.153832 769090 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
Wants=network-online.target containerd.service
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=always
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H fd:// --containerd=/run/containerd/containerd.sock -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
I1227 09:57:35.153922 769090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159617
I1227 09:57:35.181379 769090 main.go:144] libmachine: Using SSH client type: native
I1227 09:57:35.181695 769090 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33728 <nil> <nil>}
I1227 09:57:35.181712 769090 main.go:144] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1227 09:57:36.406595 769090 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2025-12-12 14:49:02.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2025-12-27 09:57:35.148525118 +0000
@@ -9,23 +9,34 @@
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
Restart=always
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H fd:// --containerd=/run/containerd/containerd.sock -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
+
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
+LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I1227 09:57:36.406630 769090 machine.go:97] duration metric: took 5.651265169s to provisionDockerMachine
I1227 09:57:36.406643 769090 client.go:176] duration metric: took 12.985158917s to LocalClient.Create
I1227 09:57:36.406661 769090 start.go:167] duration metric: took 12.98522367s to libmachine.API.Create "force-systemd-env-159617"
I1227 09:57:36.406668 769090 start.go:293] postStartSetup for "force-systemd-env-159617" (driver="docker")
I1227 09:57:36.406681 769090 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1227 09:57:36.406740 769090 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1227 09:57:36.406784 769090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159617
I1227 09:57:36.424421 769090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33728 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-env-159617/id_rsa Username:docker}
I1227 09:57:36.529164 769090 ssh_runner.go:195] Run: cat /etc/os-release
I1227 09:57:36.534359 769090 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1227 09:57:36.534393 769090 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1227 09:57:36.534406 769090 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-548332/.minikube/addons for local assets ...
I1227 09:57:36.534457 769090 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-548332/.minikube/files for local assets ...
I1227 09:57:36.534546 769090 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem -> 5501972.pem in /etc/ssl/certs
I1227 09:57:36.534559 769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem -> /etc/ssl/certs/5501972.pem
I1227 09:57:36.534656 769090 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1227 09:57:36.545176 769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem --> /etc/ssl/certs/5501972.pem (1708 bytes)
I1227 09:57:36.564519 769090 start.go:296] duration metric: took 157.818194ms for postStartSetup
I1227 09:57:36.564872 769090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-159617
I1227 09:57:36.582964 769090 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/config.json ...
I1227 09:57:36.583262 769090 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1227 09:57:36.583316 769090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159617
I1227 09:57:36.603598 769090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33728 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-env-159617/id_rsa Username:docker}
I1227 09:57:36.705489 769090 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1227 09:57:36.712003 769090 start.go:128] duration metric: took 13.295769122s to createHost
I1227 09:57:36.712030 769090 start.go:83] releasing machines lock for "force-systemd-env-159617", held for 13.295895493s
I1227 09:57:36.712104 769090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-159617
I1227 09:57:36.735458 769090 ssh_runner.go:195] Run: cat /version.json
I1227 09:57:36.735509 769090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159617
I1227 09:57:36.735527 769090 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1227 09:57:36.735606 769090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159617
I1227 09:57:36.763793 769090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33728 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-env-159617/id_rsa Username:docker}
I1227 09:57:36.767335 769090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33728 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-env-159617/id_rsa Username:docker}
I1227 09:57:36.874762 769090 ssh_runner.go:195] Run: systemctl --version
I1227 09:57:36.974322 769090 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1227 09:57:36.981372 769090 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1227 09:57:36.981442 769090 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1227 09:57:37.027684 769090 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
I1227 09:57:37.027787 769090 start.go:496] detecting cgroup driver to use...
I1227 09:57:37.027825 769090 start.go:500] using "systemd" cgroup driver as enforced via flags
I1227 09:57:37.028014 769090 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1227 09:57:37.048308 769090 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1227 09:57:37.060423 769090 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1227 09:57:37.072092 769090 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
I1227 09:57:37.072150 769090 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
I1227 09:57:37.082000 769090 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1227 09:57:37.091287 769090 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1227 09:57:37.099834 769090 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1227 09:57:37.120427 769090 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1227 09:57:37.128839 769090 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1227 09:57:37.139785 769090 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1227 09:57:37.156006 769090 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1227 09:57:37.167227 769090 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1227 09:57:37.176858 769090 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1227 09:57:37.188913 769090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 09:57:37.345099 769090 ssh_runner.go:195] Run: sudo systemctl restart containerd
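The sed edits above flip containerd to the systemd cgroup driver before the restart. A minimal sketch for verifying the result by hand, assuming the same config path as the log:
  # expect: SystemdCgroup = true
  grep -E '^ *SystemdCgroup' /etc/containerd/config.toml
  sudo systemctl is-active containerd   # expect: active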
I1227 09:57:37.452805 769090 start.go:496] detecting cgroup driver to use...
I1227 09:57:37.452846 769090 start.go:500] using "systemd" cgroup driver as enforced via flags
I1227 09:57:37.452907 769090 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I1227 09:57:37.474525 769090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1227 09:57:37.495905 769090 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1227 09:57:37.546927 769090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1227 09:57:37.567236 769090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1227 09:57:37.591088 769090 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I1227 09:57:37.608681 769090 ssh_runner.go:195] Run: which cri-dockerd
I1227 09:57:37.613473 769090 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I1227 09:57:37.622987 769090 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
I1227 09:57:37.639261 769090 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I1227 09:57:37.803450 769090 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I1227 09:57:37.985157 769090 docker.go:578] configuring docker to use "systemd" as cgroup driver...
I1227 09:57:37.985302 769090 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
I1227 09:57:38.001357 769090 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
I1227 09:57:38.018865 769090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
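The 129-byte payload copied to /etc/docker/daemon.json above is not shown in the log. A plausible minimal daemon.json that pins dockerd to the systemd cgroup driver (the exact contents here are an assumption):
  cat <<'EOF' | sudo tee /etc/docker/daemon.json
  {
    "exec-opts": ["native.cgroupdriver=systemd"]
  }
  EOF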
I1227 09:57:33.902589 769388 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-574701
I1227 09:57:33.902611 769388 ubuntu.go:182] provisioning hostname "force-systemd-flag-574701"
I1227 09:57:33.902682 769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
I1227 09:57:33.920165 769388 main.go:144] libmachine: Using SSH client type: native
I1227 09:57:33.920469 769388 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33723 <nil> <nil>}
I1227 09:57:33.920480 769388 main.go:144] libmachine: About to run SSH command:
sudo hostname force-systemd-flag-574701 && echo "force-systemd-flag-574701" | sudo tee /etc/hostname
I1227 09:57:34.085277 769388 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-574701
I1227 09:57:34.085356 769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
I1227 09:57:34.102383 769388 main.go:144] libmachine: Using SSH client type: native
I1227 09:57:34.102698 769388 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33723 <nil> <nil>}
I1227 09:57:34.102716 769388 main.go:144] libmachine: About to run SSH command:
if ! grep -xq '.*\sforce-systemd-flag-574701' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-574701/g' /etc/hosts;
else
echo '127.0.1.1 force-systemd-flag-574701' | sudo tee -a /etc/hosts;
fi
fi
I1227 09:57:34.255031 769388 main.go:144] libmachine: SSH cmd err, output: <nil>:
I1227 09:57:34.255059 769388 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-548332/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-548332/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-548332/.minikube}
I1227 09:57:34.255083 769388 ubuntu.go:190] setting up certificates
I1227 09:57:34.255093 769388 provision.go:84] configureAuth start
I1227 09:57:34.255175 769388 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-574701
I1227 09:57:34.271814 769388 provision.go:143] copyHostCerts
I1227 09:57:34.271855 769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22343-548332/.minikube/ca.pem
I1227 09:57:34.271887 769388 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-548332/.minikube/ca.pem, removing ...
I1227 09:57:34.271900 769388 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-548332/.minikube/ca.pem
I1227 09:57:34.271973 769388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-548332/.minikube/ca.pem (1078 bytes)
I1227 09:57:34.272067 769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22343-548332/.minikube/cert.pem
I1227 09:57:34.272089 769388 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-548332/.minikube/cert.pem, removing ...
I1227 09:57:34.272097 769388 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-548332/.minikube/cert.pem
I1227 09:57:34.272126 769388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-548332/.minikube/cert.pem (1123 bytes)
I1227 09:57:34.272178 769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22343-548332/.minikube/key.pem
I1227 09:57:34.272198 769388 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-548332/.minikube/key.pem, removing ...
I1227 09:57:34.272205 769388 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-548332/.minikube/key.pem
I1227 09:57:34.272232 769388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-548332/.minikube/key.pem (1675 bytes)
I1227 09:57:34.272293 769388 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-548332/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-574701 san=[127.0.0.1 192.168.76.2 force-systemd-flag-574701 localhost minikube]
I1227 09:57:34.545510 769388 provision.go:177] copyRemoteCerts
I1227 09:57:34.545576 769388 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1227 09:57:34.545630 769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
I1227 09:57:34.562287 769388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33723 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-flag-574701/id_rsa Username:docker}
I1227 09:57:34.663483 769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I1227 09:57:34.663552 769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1227 09:57:34.681829 769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/machines/server.pem -> /etc/docker/server.pem
I1227 09:57:34.681902 769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
I1227 09:57:34.701079 769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I1227 09:57:34.701139 769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1227 09:57:34.722250 769388 provision.go:87] duration metric: took 467.13373ms to configureAuth
I1227 09:57:34.722280 769388 ubuntu.go:206] setting minikube options for container-runtime
I1227 09:57:34.722503 769388 config.go:182] Loaded profile config "force-systemd-flag-574701": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 09:57:34.722587 769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
I1227 09:57:34.748482 769388 main.go:144] libmachine: Using SSH client type: native
I1227 09:57:34.748825 769388 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33723 <nil> <nil>}
I1227 09:57:34.748842 769388 main.go:144] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1227 09:57:34.911917 769388 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
I1227 09:57:34.911937 769388 ubuntu.go:71] root file system type: overlay
I1227 09:57:34.912090 769388 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I1227 09:57:34.912153 769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
I1227 09:57:34.931590 769388 main.go:144] libmachine: Using SSH client type: native
I1227 09:57:34.931909 769388 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33723 <nil> <nil>}
I1227 09:57:34.931998 769388 main.go:144] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
Wants=network-online.target containerd.service
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=always
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
-H fd:// --containerd=/run/containerd/containerd.sock \
-H unix:///var/run/docker.sock \
--default-ulimit=nofile=1048576:1048576 \
--tlsverify \
--tlscacert /etc/docker/ca.pem \
--tlscert /etc/docker/server.pem \
--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1227 09:57:35.094955 769388 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
Wants=network-online.target containerd.service
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=always
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H fd:// --containerd=/run/containerd/containerd.sock -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
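The unit's own comments describe the empty-ExecStart reset idiom: the first bare ExecStart= clears any inherited command so the second one stands alone. This run rewrites the full unit file, but the same effect can be sketched as a systemd drop-in (the override path below is hypothetical):
  sudo mkdir -p /etc/systemd/system/docker.service.d
  printf '[Service]\nExecStart=\nExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock\n' \
    | sudo tee /etc/systemd/system/docker.service.d/override.conf
  sudo systemctl daemon-reload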
I1227 09:57:35.095071 769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
I1227 09:57:35.115477 769388 main.go:144] libmachine: Using SSH client type: native
I1227 09:57:35.115820 769388 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33723 <nil> <nil>}
I1227 09:57:35.115843 769388 main.go:144] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1227 09:57:36.313708 769388 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2025-12-12 14:49:02.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2025-12-27 09:57:35.088526773 +0000
@@ -9,23 +9,34 @@
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
Restart=always
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H fd:// --containerd=/run/containerd/containerd.sock -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
+
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
+LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
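After the diff-and-swap above replaces the unit, the effective ExecStart and cgroup driver can be confirmed with commands this log itself uses elsewhere:
  sudo systemctl cat docker.service | grep -A1 '^ExecStart='
  docker info --format '{{.CgroupDriver}}'   # expect: systemd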
I1227 09:57:36.313732 769388 machine.go:97] duration metric: took 5.608683566s to provisionDockerMachine
I1227 09:57:36.313745 769388 client.go:176] duration metric: took 12.147489846s to LocalClient.Create
I1227 09:57:36.313757 769388 start.go:167] duration metric: took 12.14755212s to libmachine.API.Create "force-systemd-flag-574701"
I1227 09:57:36.313768 769388 start.go:293] postStartSetup for "force-systemd-flag-574701" (driver="docker")
I1227 09:57:36.313777 769388 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1227 09:57:36.313843 769388 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1227 09:57:36.313894 769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
I1227 09:57:36.333968 769388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33723 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-flag-574701/id_rsa Username:docker}
I1227 09:57:36.436051 769388 ssh_runner.go:195] Run: cat /etc/os-release
I1227 09:57:36.439811 769388 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1227 09:57:36.439837 769388 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1227 09:57:36.439848 769388 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-548332/.minikube/addons for local assets ...
I1227 09:57:36.439901 769388 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-548332/.minikube/files for local assets ...
I1227 09:57:36.439994 769388 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem -> 5501972.pem in /etc/ssl/certs
I1227 09:57:36.440010 769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem -> /etc/ssl/certs/5501972.pem
I1227 09:57:36.440117 769388 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1227 09:57:36.449353 769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem --> /etc/ssl/certs/5501972.pem (1708 bytes)
I1227 09:57:36.472877 769388 start.go:296] duration metric: took 159.095049ms for postStartSetup
I1227 09:57:36.473245 769388 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-574701
I1227 09:57:36.490073 769388 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/config.json ...
I1227 09:57:36.490364 769388 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1227 09:57:36.490419 769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
I1227 09:57:36.508708 769388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33723 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-flag-574701/id_rsa Username:docker}
I1227 09:57:36.616568 769388 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1227 09:57:36.622218 769388 start.go:128] duration metric: took 12.460850316s to createHost
I1227 09:57:36.622246 769388 start.go:83] releasing machines lock for "force-systemd-flag-574701", held for 12.460980323s
I1227 09:57:36.622323 769388 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-574701
I1227 09:57:36.641788 769388 ssh_runner.go:195] Run: cat /version.json
I1227 09:57:36.641849 769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
I1227 09:57:36.642098 769388 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1227 09:57:36.642163 769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
I1227 09:57:36.664287 769388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33723 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-flag-574701/id_rsa Username:docker}
I1227 09:57:36.672747 769388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33723 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-flag-574701/id_rsa Username:docker}
I1227 09:57:36.780184 769388 ssh_runner.go:195] Run: systemctl --version
I1227 09:57:36.880930 769388 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1227 09:57:36.887011 769388 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1227 09:57:36.887080 769388 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1227 09:57:36.924112 769388 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
I1227 09:57:36.924139 769388 start.go:496] detecting cgroup driver to use...
I1227 09:57:36.924152 769388 start.go:500] using "systemd" cgroup driver as enforced via flags
I1227 09:57:36.924252 769388 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1227 09:57:36.946873 769388 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1227 09:57:36.956487 769388 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1227 09:57:36.966480 769388 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
I1227 09:57:36.966545 769388 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
I1227 09:57:36.977403 769388 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1227 09:57:36.987483 769388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1227 09:57:36.998514 769388 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1227 09:57:37.010694 769388 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1227 09:57:37.022875 769388 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1227 09:57:37.036011 769388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1227 09:57:37.044803 769388 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1227 09:57:37.054260 769388 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1227 09:57:37.063604 769388 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1227 09:57:37.071796 769388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 09:57:37.216587 769388 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1227 09:57:37.323467 769388 start.go:496] detecting cgroup driver to use...
I1227 09:57:37.323492 769388 start.go:500] using "systemd" cgroup driver as enforced via flags
I1227 09:57:37.323546 769388 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I1227 09:57:37.352336 769388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1227 09:57:37.365635 769388 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1227 09:57:37.402353 769388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1227 09:57:37.420004 769388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1227 09:57:37.441069 769388 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I1227 09:57:37.461000 769388 ssh_runner.go:195] Run: which cri-dockerd
I1227 09:57:37.468781 769388 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I1227 09:57:37.477924 769388 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
I1227 09:57:37.502109 769388 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I1227 09:57:37.672967 769388 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I1227 09:57:37.840323 769388 docker.go:578] configuring docker to use "systemd" as cgroup driver...
I1227 09:57:37.840416 769388 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
I1227 09:57:37.872525 769388 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
I1227 09:57:37.886221 769388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 09:57:38.039548 769388 ssh_runner.go:195] Run: sudo systemctl restart docker
I1227 09:57:38.563380 769388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1227 09:57:38.577307 769388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I1227 09:57:38.592258 769388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I1227 09:57:38.608999 769388 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I1227 09:57:38.783640 769388 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I1227 09:57:38.955435 769388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 09:57:39.116493 769388 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I1227 09:57:39.131867 769388 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
I1227 09:57:39.146438 769388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 09:57:39.292670 769388 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I1227 09:57:39.371970 769388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I1227 09:57:39.392203 769388 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
I1227 09:57:39.392325 769388 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I1227 09:57:39.396824 769388 start.go:574] Will wait 60s for crictl version
I1227 09:57:39.396962 769388 ssh_runner.go:195] Run: which crictl
I1227 09:57:39.400890 769388 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1227 09:57:39.425825 769388 start.go:590] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 29.1.3
RuntimeApiVersion: v1
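The same probe can be repeated by hand; the endpoint matches the /etc/crictl.yaml written above:
  sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version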
I1227 09:57:39.425938 769388 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1227 09:57:39.452940 769388 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1227 09:57:38.182967 769090 ssh_runner.go:195] Run: sudo systemctl restart docker
I1227 09:57:38.643595 769090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1227 09:57:38.659567 769090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I1227 09:57:38.676415 769090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I1227 09:57:38.693157 769090 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I1227 09:57:38.864384 769090 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I1227 09:57:39.021630 769090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 09:57:39.162919 769090 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I1227 09:57:39.195686 769090 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
I1227 09:57:39.211669 769090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 09:57:39.365125 769090 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I1227 09:57:39.465622 769090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I1227 09:57:39.482004 769090 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
I1227 09:57:39.482130 769090 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I1227 09:57:39.486220 769090 start.go:574] Will wait 60s for crictl version
I1227 09:57:39.486340 769090 ssh_runner.go:195] Run: which crictl
I1227 09:57:39.491356 769090 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1227 09:57:39.522612 769090 start.go:590] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 29.1.3
RuntimeApiVersion: v1
I1227 09:57:39.522673 769090 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1227 09:57:39.553580 769090 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1227 09:57:39.589853 769090 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 29.1.3 ...
I1227 09:57:39.589955 769090 cli_runner.go:164] Run: docker network inspect force-systemd-env-159617 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1227 09:57:39.609607 769090 ssh_runner.go:195] Run: grep 192.168.85.1 host.minikube.internal$ /etc/hosts
I1227 09:57:39.613910 769090 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1227 09:57:39.623309 769090 kubeadm.go:884] updating cluster {Name:force-systemd-env-159617 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-159617 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
I1227 09:57:39.623458 769090 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I1227 09:57:39.623516 769090 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1227 09:57:39.644906 769090 docker.go:694] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
registry.k8s.io/pause:3.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I1227 09:57:39.644931 769090 docker.go:624] Images already preloaded, skipping extraction
I1227 09:57:39.644988 769090 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1227 09:57:39.664959 769090 docker.go:694] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
registry.k8s.io/pause:3.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I1227 09:57:39.664988 769090 cache_images.go:86] Images are preloaded, skipping loading
I1227 09:57:39.664998 769090 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 docker true true} ...
I1227 09:57:39.665088 769090 kubeadm.go:947] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-env-159617 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
[Install]
config:
{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-159617 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1227 09:57:39.665158 769090 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I1227 09:57:39.747517 769090 cni.go:84] Creating CNI manager for ""
I1227 09:57:39.747540 769090 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1227 09:57:39.747563 769090 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1227 09:57:39.747608 769090 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-159617 NodeName:force-systemd-env-159617 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1227 09:57:39.747762 769090 kubeadm.go:203] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.85.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "force-systemd-env-159617"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.85.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.35.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
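Before kubeadm consumes the rendered file, it can be sanity-checked; kubeadm config validate exists in recent releases, and its presence in this particular binary is an assumption:
  sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml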
I1227 09:57:39.747834 769090 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
I1227 09:57:39.760575 769090 binaries.go:51] Found k8s binaries, skipping transfer
I1227 09:57:39.760648 769090 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1227 09:57:39.775516 769090 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
I1227 09:57:39.797752 769090 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1227 09:57:39.810219 769090 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
I1227 09:57:39.828590 769090 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I1227 09:57:39.832469 769090 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1227 09:57:39.842381 769090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 09:57:40.061511 769090 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1227 09:57:40.082736 769090 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617 for IP: 192.168.85.2
I1227 09:57:40.082833 769090 certs.go:195] generating shared ca certs ...
I1227 09:57:40.082870 769090 certs.go:227] acquiring lock for ca certs: {Name:mka57d8b1d581d5829589e9bbd771e6117908cec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 09:57:40.083102 769090 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-548332/.minikube/ca.key
I1227 09:57:40.083211 769090 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-548332/.minikube/proxy-client-ca.key
I1227 09:57:40.083245 769090 certs.go:257] generating profile certs ...
I1227 09:57:40.083338 769090 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/client.key
I1227 09:57:40.083381 769090 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/client.crt with IP's: []
I1227 09:57:40.290500 769090 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/client.crt ...
I1227 09:57:40.290601 769090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/client.crt: {Name:mkdef657d92ac442b8ca8d24bafb061317e911bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 09:57:40.290877 769090 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/client.key ...
I1227 09:57:40.290927 769090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/client.key: {Name:mkd98e7a2fa2573ec393c9c33ed2af8ef854cd31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 09:57:40.291097 769090 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.key.c11c0d17
I1227 09:57:40.291156 769090 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.crt.c11c0d17 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
I1227 09:57:40.441193 769090 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.crt.c11c0d17 ...
I1227 09:57:40.441292 769090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.crt.c11c0d17: {Name:mka639a3de484b92be9c260344df9e8bdedff2cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 09:57:40.441538 769090 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.key.c11c0d17 ...
I1227 09:57:40.441579 769090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.key.c11c0d17: {Name:mkdfe6ab9be254d46412de6c107cb553d654d1d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 09:57:40.441720 769090 certs.go:382] copying /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.crt.c11c0d17 -> /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.crt
I1227 09:57:40.441858 769090 certs.go:386] copying /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.key.c11c0d17 -> /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.key
I1227 09:57:40.441988 769090 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/proxy-client.key
I1227 09:57:40.442045 769090 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/proxy-client.crt with IP's: []
I1227 09:57:40.780289 769090 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/proxy-client.crt ...
I1227 09:57:40.780323 769090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/proxy-client.crt: {Name:mk8f859572961556f4c1a1a4febed8df29d82f62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 09:57:40.780533 769090 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/proxy-client.key ...
I1227 09:57:40.780542 769090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/proxy-client.key: {Name:mk7056050a32483ae445b0ae07006f0562cf0255 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
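minikube generates these profile certs in Go (crypto.go); purely as an illustration, a rough openssl equivalent of the signed apiserver cert with the SANs listed above (the CN and one-year lifetime are assumptions):
  openssl req -new -newkey rsa:2048 -nodes -keyout apiserver.key -subj '/CN=minikube' -out apiserver.csr
  openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 \
    -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.85.2') \
    -out apiserver.crt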
I1227 09:57:40.780640 769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I1227 09:57:40.780659 769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I1227 09:57:40.780678 769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I1227 09:57:40.780691 769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I1227 09:57:40.780705 769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I1227 09:57:40.780722 769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I1227 09:57:40.780742 769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I1227 09:57:40.780754 769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I1227 09:57:40.780817 769090 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/550197.pem (1338 bytes)
W1227 09:57:40.780867 769090 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-548332/.minikube/certs/550197_empty.pem, impossibly tiny 0 bytes
I1227 09:57:40.780876 769090 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca-key.pem (1679 bytes)
I1227 09:57:40.780908 769090 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem (1078 bytes)
I1227 09:57:40.780938 769090 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/cert.pem (1123 bytes)
I1227 09:57:40.780966 769090 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/key.pem (1675 bytes)
I1227 09:57:40.781023 769090 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem (1708 bytes)
I1227 09:57:40.781067 769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem -> /usr/share/ca-certificates/5501972.pem
I1227 09:57:40.781079 769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I1227 09:57:40.781090 769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/550197.pem -> /usr/share/ca-certificates/550197.pem
I1227 09:57:40.781688 769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1227 09:57:40.814042 769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1227 09:57:40.838435 769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1227 09:57:40.880890 769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1227 09:57:40.906281 769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
I1227 09:57:40.928048 769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1227 09:57:40.950863 769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1227 09:57:40.973554 769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1227 09:57:40.993400 769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem --> /usr/share/ca-certificates/5501972.pem (1708 bytes)
I1227 09:57:41.017107 769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1227 09:57:41.037355 769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/certs/550197.pem --> /usr/share/ca-certificates/550197.pem (1338 bytes)
I1227 09:57:41.066525 769090 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
I1227 09:57:41.095696 769090 ssh_runner.go:195] Run: openssl version
I1227 09:57:41.107307 769090 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/550197.pem
I1227 09:57:41.118732 769090 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/550197.pem /etc/ssl/certs/550197.pem
I1227 09:57:41.132658 769090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/550197.pem
I1227 09:57:41.138503 769090 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:25 /usr/share/ca-certificates/550197.pem
I1227 09:57:41.138605 769090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/550197.pem
I1227 09:57:41.185800 769090 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I1227 09:57:41.193790 769090 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/550197.pem /etc/ssl/certs/51391683.0
I1227 09:57:41.201492 769090 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5501972.pem
I1227 09:57:41.208841 769090 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5501972.pem /etc/ssl/certs/5501972.pem
I1227 09:57:41.216427 769090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5501972.pem
I1227 09:57:41.220469 769090 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:25 /usr/share/ca-certificates/5501972.pem
I1227 09:57:41.220555 769090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5501972.pem
I1227 09:57:41.265817 769090 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I1227 09:57:41.273569 769090 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/5501972.pem /etc/ssl/certs/3ec20f2e.0
I1227 09:57:41.281083 769090 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1227 09:57:41.288616 769090 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1227 09:57:41.296277 769090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1227 09:57:41.300012 769090 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:19 /usr/share/ca-certificates/minikubeCA.pem
I1227 09:57:41.300113 769090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1227 09:57:41.343100 769090 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1227 09:57:41.351309 769090 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
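The /etc/ssl/certs/<hash>.0 symlink names above follow OpenSSL's subject-hash convention; one of them can be reproduced by hand with paths taken from the log:
  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 here
  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"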
I1227 09:57:41.358883 769090 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1227 09:57:41.362914 769090 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1227 09:57:41.362973 769090 kubeadm.go:401] StartCluster: {Name:force-systemd-env-159617 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-159617 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1227 09:57:41.363101 769090 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
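The `docker ps --filter status=paused` call above checks whether any kube-system containers were left paused (e.g. by a prior `minikube pause`) before StartCluster proceeds. A hedged sketch for inspecting and clearing that state by hand:

    # Sketch: list paused kube-system containers; unpause any IDs printed.
    docker ps --filter status=paused --filter=name='k8s_.*_(kube-system)_' --format '{{.ID}}'
    # docker unpause <ID>   # run per ID if anything was listed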
I1227 09:57:41.381051 769090 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1227 09:57:41.392106 769090 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1227 09:57:41.400552 769090 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1227 09:57:41.400659 769090 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1227 09:57:41.412462 769090 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1227 09:57:41.412533 769090 kubeadm.go:158] found existing configuration files:
I1227 09:57:41.412612 769090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1227 09:57:41.421832 769090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1227 09:57:41.421945 769090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1227 09:57:41.432909 769090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1227 09:57:41.443013 769090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1227 09:57:41.443076 769090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1227 09:57:41.451990 769090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1227 09:57:41.462018 769090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1227 09:57:41.462083 769090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1227 09:57:41.470161 769090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1227 09:57:41.479985 769090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1227 09:57:41.480066 769090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
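The grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes survives only if it already points at control-plane.minikube.internal:8443; otherwise it is removed so kubeadm can regenerate it. The same logic as a compact loop (a sketch restating the commands shown in the log):

    # Sketch: keep only kubeconfigs that reference the expected control-plane endpoint.
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done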
I1227 09:57:41.488640 769090 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1227 09:57:41.541967 769090 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
I1227 09:57:41.544237 769090 kubeadm.go:319] [preflight] Running pre-flight checks
I1227 09:57:41.651990 769090 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1227 09:57:41.652128 769090 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1227 09:57:41.652184 769090 kubeadm.go:319] OS: Linux
I1227 09:57:41.652254 769090 kubeadm.go:319] CGROUPS_CPU: enabled
I1227 09:57:41.652330 769090 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1227 09:57:41.652403 769090 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1227 09:57:41.652481 769090 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1227 09:57:41.652557 769090 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1227 09:57:41.652636 769090 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1227 09:57:41.652713 769090 kubeadm.go:319] CGROUPS_PIDS: enabled
I1227 09:57:41.652790 769090 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1227 09:57:41.652862 769090 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1227 09:57:41.748451 769090 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1227 09:57:41.748635 769090 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1227 09:57:41.748758 769090 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1227 09:57:41.778942 769090 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1227 09:57:39.487385 769388 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 29.1.3 ...
I1227 09:57:39.487511 769388 cli_runner.go:164] Run: docker network inspect force-systemd-flag-574701 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1227 09:57:39.509398 769388 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I1227 09:57:39.513521 769388 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
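The one-liner above is a common idiom for rewriting /etc/hosts under sudo: filter out any existing host.minikube.internal entry, append the fresh one, write to a temp file, then `sudo cp` it into place (a plain `sudo cmd > /etc/hosts` would open the file as the unprivileged user, so the redirect has to happen outside the privileged step). A generalized sketch; NAME and IP are the values from this run, used illustratively:

    # Sketch: replace-or-add a tab-separated hosts entry, then install atomically-ish.
    NAME=host.minikube.internal; IP=192.168.76.1
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > "/tmp/hosts.$$"
    sudo cp "/tmp/hosts.$$" /etc/hosts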
I1227 09:57:39.525777 769388 kubeadm.go:884] updating cluster {Name:force-systemd-flag-574701 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-574701 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
I1227 09:57:39.525889 769388 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I1227 09:57:39.525945 769388 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1227 09:57:39.550774 769388 docker.go:694] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
registry.k8s.io/pause:3.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I1227 09:57:39.550799 769388 docker.go:624] Images already preloaded, skipping extraction
I1227 09:57:39.550866 769388 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1227 09:57:39.574219 769388 docker.go:694] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
registry.k8s.io/pause:3.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I1227 09:57:39.574242 769388 cache_images.go:86] Images are preloaded, skipping loading
I1227 09:57:39.574252 769388 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 docker true true} ...
I1227 09:57:39.574354 769388 kubeadm.go:947] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-574701 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
[Install]
config:
{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-574701 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
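The [Unit]/[Service] fragment above is written as a systemd drop-in (10-kubeadm.conf, scp'd a few lines below) that clears and replaces the kubelet's ExecStart. Once the daemon-reload later in the log has run, the merged unit can be checked with standard systemd tooling (a sketch, run inside the node):

    # Sketch: confirm the kubelet drop-in took effect.
    systemctl cat kubelet                          # base unit plus 10-kubeadm.conf
    systemctl show kubelet -p ExecStart --no-pager # the effective command line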
I1227 09:57:39.574415 769388 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I1227 09:57:39.642105 769388 cni.go:84] Creating CNI manager for ""
I1227 09:57:39.642130 769388 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1227 09:57:39.642146 769388 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1227 09:57:39.642167 769388 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-574701 NodeName:force-systemd-flag-574701 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt S
taticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1227 09:57:39.642292 769388 kubeadm.go:203] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.76.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "force-systemd-flag-574701"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.76.2"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.35.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
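The multi-document config above is what gets written to /var/tmp/minikube/kubeadm.yaml.new below. If the kubeadm build in use includes the `config validate` subcommand (present in recent releases), the file can be sanity-checked before `kubeadm init` consumes it (a sketch using the paths from this log):

    # Sketch: pre-validate the generated kubeadm config on the node.
    sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml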
I1227 09:57:39.642363 769388 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
I1227 09:57:39.651846 769388 binaries.go:51] Found k8s binaries, skipping transfer
I1227 09:57:39.651910 769388 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1227 09:57:39.661240 769388 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
I1227 09:57:39.677750 769388 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1227 09:57:39.692714 769388 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
I1227 09:57:39.705586 769388 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I1227 09:57:39.709624 769388 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1227 09:57:39.719304 769388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 09:57:39.872388 769388 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1227 09:57:39.905933 769388 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701 for IP: 192.168.76.2
I1227 09:57:39.905958 769388 certs.go:195] generating shared ca certs ...
I1227 09:57:39.905975 769388 certs.go:227] acquiring lock for ca certs: {Name:mka57d8b1d581d5829589e9bbd771e6117908cec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 09:57:39.906194 769388 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-548332/.minikube/ca.key
I1227 09:57:39.906270 769388 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-548332/.minikube/proxy-client-ca.key
I1227 09:57:39.906284 769388 certs.go:257] generating profile certs ...
I1227 09:57:39.906359 769388 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/client.key
I1227 09:57:39.906376 769388 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/client.crt with IP's: []
I1227 09:57:40.185176 769388 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/client.crt ...
I1227 09:57:40.185209 769388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/client.crt: {Name:mkd8df8f694ab6bd0be298ca10765d50a0840ba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 09:57:40.185510 769388 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/client.key ...
I1227 09:57:40.185530 769388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/client.key: {Name:mkedfb2c92eeb1c8634de35cfef29ff1eb8c71f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 09:57:40.185683 769388 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.key.b6598b4a
I1227 09:57:40.185706 769388 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.crt.b6598b4a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
I1227 09:57:40.780814 769388 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.crt.b6598b4a ...
I1227 09:57:40.780832 769388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.crt.b6598b4a: {Name:mk220ae28824c87aa5d8ba64a794d883980a39f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 09:57:40.780959 769388 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.key.b6598b4a ...
I1227 09:57:40.780966 769388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.key.b6598b4a: {Name:mkac97d48f25e58d566aafd93cbcf157b2cb0117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 09:57:40.781034 769388 certs.go:382] copying /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.crt.b6598b4a -> /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.crt
I1227 09:57:40.781140 769388 certs.go:386] copying /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.key.b6598b4a -> /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.key
I1227 09:57:40.781206 769388 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.key
I1227 09:57:40.781219 769388 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.crt with IP's: []
I1227 09:57:40.864310 769388 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.crt ...
I1227 09:57:40.864342 769388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.crt: {Name:mk5dc7c59c3dfc68c7c8e2186f25c0bda8c48900 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 09:57:40.864549 769388 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.key ...
I1227 09:57:40.864569 769388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.key: {Name:mk7098be4d9c15bf1f3c8453e90bcc9388cdc9d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
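Each crypto.go/lock.go pair above writes one cert/key under the profile directory. The generated leaf certificates can be inspected with openssl to confirm their SANs and validity window match the IPs logged above (a sketch; the path is the apiserver cert from this run, and `-ext` needs OpenSSL 1.1.1 or newer):

    # Sketch: check subject, lifetime, and SANs of a freshly generated cert.
    openssl x509 -noout -subject -dates -ext subjectAltName \
      -in /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.crt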
I1227 09:57:40.864678 769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I1227 09:57:40.864715 769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I1227 09:57:40.864736 769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I1227 09:57:40.864755 769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I1227 09:57:40.864768 769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I1227 09:57:40.864796 769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I1227 09:57:40.864821 769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I1227 09:57:40.864837 769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I1227 09:57:40.864913 769388 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/550197.pem (1338 bytes)
W1227 09:57:40.864990 769388 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-548332/.minikube/certs/550197_empty.pem, impossibly tiny 0 bytes
I1227 09:57:40.865007 769388 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca-key.pem (1679 bytes)
I1227 09:57:40.865038 769388 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem (1078 bytes)
I1227 09:57:40.865102 769388 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/cert.pem (1123 bytes)
I1227 09:57:40.865134 769388 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/key.pem (1675 bytes)
I1227 09:57:40.865199 769388 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem (1708 bytes)
I1227 09:57:40.865244 769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I1227 09:57:40.865267 769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/550197.pem -> /usr/share/ca-certificates/550197.pem
I1227 09:57:40.865282 769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem -> /usr/share/ca-certificates/5501972.pem
I1227 09:57:40.865799 769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1227 09:57:40.898569 769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1227 09:57:40.927873 769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1227 09:57:40.948313 769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1227 09:57:40.969255 769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
I1227 09:57:40.989875 769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1227 09:57:41.010787 769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1227 09:57:41.031724 769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1227 09:57:41.051433 769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1227 09:57:41.077779 769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/certs/550197.pem --> /usr/share/ca-certificates/550197.pem (1338 bytes)
I1227 09:57:41.108786 769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem --> /usr/share/ca-certificates/5501972.pem (1708 bytes)
I1227 09:57:41.133210 769388 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
I1227 09:57:41.147828 769388 ssh_runner.go:195] Run: openssl version
I1227 09:57:41.154460 769388 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5501972.pem
I1227 09:57:41.161904 769388 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5501972.pem /etc/ssl/certs/5501972.pem
I1227 09:57:41.169300 769388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5501972.pem
I1227 09:57:41.173499 769388 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:25 /usr/share/ca-certificates/5501972.pem
I1227 09:57:41.173602 769388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5501972.pem
I1227 09:57:41.219730 769388 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I1227 09:57:41.227914 769388 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/5501972.pem /etc/ssl/certs/3ec20f2e.0
I1227 09:57:41.234863 769388 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1227 09:57:41.242037 769388 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1227 09:57:41.252122 769388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1227 09:57:41.256231 769388 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:19 /usr/share/ca-certificates/minikubeCA.pem
I1227 09:57:41.256330 769388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1227 09:57:41.303396 769388 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1227 09:57:41.311657 769388 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
I1227 09:57:41.319645 769388 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/550197.pem
I1227 09:57:41.327015 769388 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/550197.pem /etc/ssl/certs/550197.pem
I1227 09:57:41.334332 769388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/550197.pem
I1227 09:57:41.338256 769388 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:25 /usr/share/ca-certificates/550197.pem
I1227 09:57:41.338360 769388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/550197.pem
I1227 09:57:41.382878 769388 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I1227 09:57:41.390786 769388 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/550197.pem /etc/ssl/certs/51391683.0
I1227 09:57:41.399024 769388 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1227 09:57:41.403779 769388 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1227 09:57:41.403832 769388 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-574701 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-574701 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1227 09:57:41.403946 769388 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1227 09:57:41.429145 769388 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1227 09:57:41.439644 769388 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1227 09:57:41.448769 769388 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1227 09:57:41.448834 769388 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1227 09:57:41.460465 769388 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1227 09:57:41.460481 769388 kubeadm.go:158] found existing configuration files:
I1227 09:57:41.460550 769388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1227 09:57:41.471042 769388 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1227 09:57:41.471103 769388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1227 09:57:41.480178 769388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1227 09:57:41.490398 769388 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1227 09:57:41.490464 769388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1227 09:57:41.499105 769388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1227 09:57:41.510257 769388 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1227 09:57:41.510321 769388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1227 09:57:41.520923 769388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1227 09:57:41.534256 769388 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1227 09:57:41.534333 769388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1227 09:57:41.542461 769388 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1227 09:57:41.646824 769388 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
I1227 09:57:41.648335 769388 kubeadm.go:319] [preflight] Running pre-flight checks
I1227 09:57:41.753889 769388 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1227 09:57:41.754015 769388 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1227 09:57:41.754079 769388 kubeadm.go:319] OS: Linux
I1227 09:57:41.754162 769388 kubeadm.go:319] CGROUPS_CPU: enabled
I1227 09:57:41.754242 769388 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1227 09:57:41.754318 769388 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1227 09:57:41.754400 769388 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1227 09:57:41.754479 769388 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1227 09:57:41.754553 769388 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1227 09:57:41.754656 769388 kubeadm.go:319] CGROUPS_PIDS: enabled
I1227 09:57:41.754726 769388 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1227 09:57:41.754805 769388 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1227 09:57:41.836243 769388 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1227 09:57:41.836443 769388 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1227 09:57:41.836586 769388 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1227 09:57:41.855494 769388 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1227 09:57:41.785794 769090 out.go:252] - Generating certificates and keys ...
I1227 09:57:41.785959 769090 kubeadm.go:319] [certs] Using existing ca certificate authority
I1227 09:57:41.786069 769090 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1227 09:57:42.111543 769090 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1227 09:57:42.252770 769090 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1227 09:57:42.503417 769090 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1227 09:57:42.668993 769090 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1227 09:57:43.021398 769090 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1227 09:57:43.021831 769090 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-159617 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I1227 09:57:41.860963 769388 out.go:252] - Generating certificates and keys ...
I1227 09:57:41.861090 769388 kubeadm.go:319] [certs] Using existing ca certificate authority
I1227 09:57:41.861187 769388 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1227 09:57:42.027134 769388 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1227 09:57:42.183308 769388 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1227 09:57:42.275495 769388 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1227 09:57:42.538151 769388 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1227 09:57:42.689457 769388 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1227 09:57:42.690078 769388 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-574701 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
I1227 09:57:42.729913 769388 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1227 09:57:42.730516 769388 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-574701 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
I1227 09:57:42.981667 769388 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1227 09:57:43.099131 769388 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1227 09:57:43.810479 769388 kubeadm.go:319] [certs] Generating "sa" key and public key
I1227 09:57:43.811011 769388 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1227 09:57:44.109743 769388 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1227 09:57:44.315485 769388 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1227 09:57:44.540089 769388 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1227 09:57:44.694926 769388 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1227 09:57:45.077270 769388 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1227 09:57:45.080386 769388 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1227 09:57:45.089864 769388 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1227 09:57:43.563328 769090 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1227 09:57:43.564051 769090 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-159617 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I1227 09:57:43.973250 769090 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1227 09:57:44.693761 769090 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1227 09:57:44.975792 769090 kubeadm.go:319] [certs] Generating "sa" key and public key
I1227 09:57:44.976216 769090 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1227 09:57:45.527516 769090 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1227 09:57:45.744663 769090 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1227 09:57:45.991918 769090 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1227 09:57:46.189187 769090 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1227 09:57:46.428467 769090 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1227 09:57:46.429216 769090 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1227 09:57:46.432110 769090 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1227 09:57:46.435922 769090 out.go:252] - Booting up control plane ...
I1227 09:57:46.436040 769090 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1227 09:57:46.436157 769090 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1227 09:57:46.436262 769090 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1227 09:57:46.453052 769090 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1227 09:57:46.453445 769090 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1227 09:57:46.460773 769090 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1227 09:57:46.461104 769090 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1227 09:57:46.461150 769090 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1227 09:57:46.595002 769090 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1227 09:57:46.595169 769090 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1227 09:57:45.093574 769388 out.go:252] - Booting up control plane ...
I1227 09:57:45.095563 769388 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1227 09:57:45.097773 769388 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1227 09:57:45.099785 769388 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1227 09:57:45.145757 769388 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1227 09:57:45.145889 769388 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1227 09:57:45.157698 769388 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1227 09:57:45.158555 769388 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1227 09:57:45.158619 769388 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1227 09:57:45.405440 769388 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1227 09:57:45.405562 769388 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1227 10:01:45.399682 769388 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001476405s
I1227 10:01:45.399725 769388 kubeadm.go:319]
I1227 10:01:45.399789 769388 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1227 10:01:45.399827 769388 kubeadm.go:319] - The kubelet is not running
I1227 10:01:45.399942 769388 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1227 10:01:45.399950 769388 kubeadm.go:319]
I1227 10:01:45.400064 769388 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1227 10:01:45.400098 769388 kubeadm.go:319] - 'systemctl status kubelet'
I1227 10:01:45.400133 769388 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1227 10:01:45.400138 769388 kubeadm.go:319]
I1227 10:01:45.404789 769388 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1227 10:01:45.405218 769388 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1227 10:01:45.405332 769388 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1227 10:01:45.405567 769388 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1227 10:01:45.405577 769388 kubeadm.go:319]
I1227 10:01:45.405646 769388 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
W1227 10:01:45.405800 769388 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-574701 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-574701 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001476405s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
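The failure above reduces to "the kubelet never answered on 127.0.0.1:10248 within 4m0s". The commands kubeadm itself suggests, plus a cgroup-version check (relevant given the cgroups v1 deprecation warning in the stderr above), are the usual first triage steps inside the node (a sketch):

    # Sketch: first-pass kubelet triage inside the container/node.
    systemctl status kubelet --no-pager
    journalctl -xeu kubelet --no-pager | tail -n 50
    curl -sS http://127.0.0.1:10248/healthz; echo   # the probe kubeadm was polling
    stat -fc %T /sys/fs/cgroup                      # "cgroup2fs" = v2, "tmpfs" = v1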
I1227 10:01:45.405885 769388 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
I1227 10:01:45.831088 769388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1227 10:01:45.845534 769388 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1227 10:01:45.845599 769388 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1227 10:01:45.853400 769388 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1227 10:01:45.853418 769388 kubeadm.go:158] found existing configuration files:
I1227 10:01:45.853490 769388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1227 10:01:45.862159 769388 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1227 10:01:45.862225 769388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1227 10:01:45.869960 769388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1227 10:01:45.877918 769388 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1227 10:01:45.877988 769388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1227 10:01:45.885657 769388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1227 10:01:45.893024 769388 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1227 10:01:45.893088 769388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1227 10:01:45.900643 769388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1227 10:01:45.908132 769388 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1227 10:01:45.908198 769388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1227 10:01:45.915813 769388 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1227 10:01:45.955846 769388 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
I1227 10:01:45.955910 769388 kubeadm.go:319] [preflight] Running pre-flight checks
I1227 10:01:46.044287 769388 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1227 10:01:46.044366  769388 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1227 10:01:46.044408  769388 kubeadm.go:319] OS: Linux
I1227 10:01:46.044460  769388 kubeadm.go:319] CGROUPS_CPU: enabled
I1227 10:01:46.044514  769388 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1227 10:01:46.044563  769388 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1227 10:01:46.044621  769388 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1227 10:01:46.044672  769388 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1227 10:01:46.044726  769388 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1227 10:01:46.044780  769388 kubeadm.go:319] CGROUPS_PIDS: enabled
I1227 10:01:46.044831  769388 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1227 10:01:46.044883  769388 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1227 10:01:46.122322 769388 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1227 10:01:46.122522 769388 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1227 10:01:46.122662 769388 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1227 10:01:46.135379 769388 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1227 10:01:46.139129 769388 out.go:252] - Generating certificates and keys ...
I1227 10:01:46.139327 769388 kubeadm.go:319] [certs] Using existing ca certificate authority
I1227 10:01:46.139450 769388 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1227 10:01:46.139598 769388 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1227 10:01:46.139674 769388 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
I1227 10:01:46.139756 769388 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
I1227 10:01:46.139815 769388 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
I1227 10:01:46.139883 769388 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
I1227 10:01:46.139949 769388 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
I1227 10:01:46.140059 769388 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1227 10:01:46.140138 769388 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1227 10:01:46.140469 769388 kubeadm.go:319] [certs] Using the existing "sa" key
I1227 10:01:46.140529 769388 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1227 10:01:46.278774 769388 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1227 10:01:46.467106 769388 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1227 10:01:46.674089 769388 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1227 10:01:46.962090 769388 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1227 10:01:47.089511 769388 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1227 10:01:47.090121 769388 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1227 10:01:47.094363 769388 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1227 10:01:46.594891 769090 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000241154s
I1227 10:01:46.594938 769090 kubeadm.go:319]
I1227 10:01:46.595000 769090 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1227 10:01:46.595036 769090 kubeadm.go:319] - The kubelet is not running
I1227 10:01:46.595163 769090 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1227 10:01:46.595173 769090 kubeadm.go:319]
I1227 10:01:46.595286 769090 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1227 10:01:46.595323 769090 kubeadm.go:319] - 'systemctl status kubelet'
I1227 10:01:46.595357 769090 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1227 10:01:46.595361 769090 kubeadm.go:319]
I1227 10:01:46.600352 769090 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1227 10:01:46.600807 769090 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1227 10:01:46.600916 769090 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1227 10:01:46.601157 769090 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1227 10:01:46.601163 769090 kubeadm.go:319]
I1227 10:01:46.601232 769090 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
W1227 10:01:46.601345 769090 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-159617 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-159617 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000241154s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
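Note that the CGROUPS_* rows in the verification table only confirm that the individual controllers are enabled; they say nothing about whether the host mounts the legacy v1 or the unified v2 hierarchy, which is what the deprecation warnings are actually about. A quick check (standard coreutils, not part of this log; a sketch assuming the node's /sys/fs/cgroup mount is visible):

    # cgroup2fs means the unified (v2) hierarchy; tmpfs means legacy v1:
    stat -fc %T /sys/fs/cgroup
    # Cross-check by listing the cgroup mounts directly:
    grep cgroup /proc/mounts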
I1227 10:01:46.601418 769090 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
I1227 10:01:47.049789 769090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1227 10:01:47.065686 769090 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1227 10:01:47.065751 769090 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1227 10:01:47.078067 769090 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1227 10:01:47.078144 769090 kubeadm.go:158] found existing configuration files:
I1227 10:01:47.078247 769090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1227 10:01:47.088920 769090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1227 10:01:47.089035 769090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1227 10:01:47.101290 769090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1227 10:01:47.111719 769090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1227 10:01:47.111783 769090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1227 10:01:47.119486 769090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1227 10:01:47.128720 769090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1227 10:01:47.128889 769090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1227 10:01:47.137979 769090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1227 10:01:47.146623 769090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1227 10:01:47.146781 769090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1227 10:01:47.155774 769090 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1227 10:01:47.197997 769090 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
I1227 10:01:47.198575 769090 kubeadm.go:319] [preflight] Running pre-flight checks
I1227 10:01:47.334679 769090 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1227 10:01:47.334774  769090 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1227 10:01:47.334814  769090 kubeadm.go:319] OS: Linux
I1227 10:01:47.334877  769090 kubeadm.go:319] CGROUPS_CPU: enabled
I1227 10:01:47.334937  769090 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1227 10:01:47.335000  769090 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1227 10:01:47.335065  769090 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1227 10:01:47.335164  769090 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1227 10:01:47.335236  769090 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1227 10:01:47.335294  769090 kubeadm.go:319] CGROUPS_PIDS: enabled
I1227 10:01:47.335359  769090 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1227 10:01:47.335418  769090 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1227 10:01:47.413630 769090 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1227 10:01:47.413746 769090 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1227 10:01:47.413842 769090 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1227 10:01:47.427809 769090 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1227 10:01:47.431698 769090 out.go:252] - Generating certificates and keys ...
I1227 10:01:47.431881 769090 kubeadm.go:319] [certs] Using existing ca certificate authority
I1227 10:01:47.431951 769090 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1227 10:01:47.432047 769090 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1227 10:01:47.432114 769090 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
I1227 10:01:47.432211 769090 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
I1227 10:01:47.432286 769090 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
I1227 10:01:47.432360 769090 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
I1227 10:01:47.432432 769090 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
I1227 10:01:47.432512 769090 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1227 10:01:47.432810 769090 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1227 10:01:47.433140 769090 kubeadm.go:319] [certs] Using the existing "sa" key
I1227 10:01:47.433248 769090 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1227 10:01:47.584725 769090 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1227 10:01:47.986204 769090 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1227 10:01:47.097843 769388 out.go:252] - Booting up control plane ...
I1227 10:01:47.097949 769388 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1227 10:01:47.099592 769388 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1227 10:01:47.099673 769388 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1227 10:01:47.133940 769388 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1227 10:01:47.134045 769388 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1227 10:01:47.147908 769388 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1227 10:01:47.148976 769388 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1227 10:01:47.149327 769388 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1227 10:01:47.321604 769388 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1227 10:01:47.321718 769388 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1227 10:01:48.231719 769090 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1227 10:01:48.868258 769090 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1227 10:01:49.097361 769090 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1227 10:01:49.097857 769090 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1227 10:01:49.100455 769090 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1227 10:01:49.104347 769090 out.go:252] - Booting up control plane ...
I1227 10:01:49.104456 769090 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1227 10:01:49.104539 769090 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1227 10:01:49.105527 769090 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1227 10:01:49.125548 769090 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1227 10:01:49.125672 769090 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1227 10:01:49.134446 769090 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1227 10:01:49.134626 769090 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1227 10:01:49.134694 769090 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1227 10:01:49.262884 769090 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1227 10:01:49.263010 769090 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1227 10:05:47.321648 769388 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000305874s
I1227 10:05:47.321690 769388 kubeadm.go:319]
I1227 10:05:47.321762 769388 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1227 10:05:47.321802 769388 kubeadm.go:319] - The kubelet is not running
I1227 10:05:47.321944 769388 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1227 10:05:47.321958 769388 kubeadm.go:319]
I1227 10:05:47.322066 769388 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1227 10:05:47.322103 769388 kubeadm.go:319] - 'systemctl status kubelet'
I1227 10:05:47.322153 769388 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1227 10:05:47.322165 769388 kubeadm.go:319]
I1227 10:05:47.325886 769388 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1227 10:05:47.326310 769388 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1227 10:05:47.326424 769388 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1227 10:05:47.326663 769388 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1227 10:05:47.326673 769388 kubeadm.go:319]
I1227 10:05:47.326742 769388 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
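The recurring SystemVerification warning is benign on its own: the `configs` kernel module is simply absent under /lib/modules/5.15.0-1084-aws, so kubeadm cannot read the kernel config through /proc/config.gz. On Ubuntu-style kernels the same information usually ships next to the kernel image instead (a sketch; the /boot path is the conventional location, not something this log shows):

    # Only works when the kernel exposes its config via the configs module:
    modprobe configs 2>/dev/null && zgrep CGROUP /proc/config.gz
    # Ubuntu/Debian kernels install a plain-text copy instead:
    grep CGROUP "/boot/config-$(uname -r)"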
I1227 10:05:47.326828 769388 kubeadm.go:403] duration metric: took 8m5.922999378s to StartCluster
I1227 10:05:47.326868 769388 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
I1227 10:05:47.326939 769388 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
I1227 10:05:47.362142 769388 cri.go:96] found id: ""
I1227 10:05:47.362184 769388 logs.go:282] 0 containers: []
W1227 10:05:47.362193 769388 logs.go:284] No container was found matching "kube-apiserver"
I1227 10:05:47.362200 769388 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
I1227 10:05:47.362260 769388 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
I1227 10:05:47.386992 769388 cri.go:96] found id: ""
I1227 10:05:47.387017 769388 logs.go:282] 0 containers: []
W1227 10:05:47.387026 769388 logs.go:284] No container was found matching "etcd"
I1227 10:05:47.387033 769388 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
I1227 10:05:47.387095 769388 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
I1227 10:05:47.412506 769388 cri.go:96] found id: ""
I1227 10:05:47.412532 769388 logs.go:282] 0 containers: []
W1227 10:05:47.412541 769388 logs.go:284] No container was found matching "coredns"
I1227 10:05:47.412549 769388 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
I1227 10:05:47.412607 769388 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
I1227 10:05:47.440415 769388 cri.go:96] found id: ""
I1227 10:05:47.440440 769388 logs.go:282] 0 containers: []
W1227 10:05:47.440449 769388 logs.go:284] No container was found matching "kube-scheduler"
I1227 10:05:47.440456 769388 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
I1227 10:05:47.440515 769388 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
I1227 10:05:47.465494 769388 cri.go:96] found id: ""
I1227 10:05:47.465522 769388 logs.go:282] 0 containers: []
W1227 10:05:47.465530 769388 logs.go:284] No container was found matching "kube-proxy"
I1227 10:05:47.465538 769388 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
I1227 10:05:47.465601 769388 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
I1227 10:05:47.494595 769388 cri.go:96] found id: ""
I1227 10:05:47.494628 769388 logs.go:282] 0 containers: []
W1227 10:05:47.494638 769388 logs.go:284] No container was found matching "kube-controller-manager"
I1227 10:05:47.494645 769388 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
I1227 10:05:47.494716 769388 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
I1227 10:05:47.523703 769388 cri.go:96] found id: ""
I1227 10:05:47.523728 769388 logs.go:282] 0 containers: []
W1227 10:05:47.523736 769388 logs.go:284] No container was found matching "kindnet"
I1227 10:05:47.523746 769388 logs.go:123] Gathering logs for Docker ...
I1227 10:05:47.523757 769388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I1227 10:05:47.546298 769388 logs.go:123] Gathering logs for container status ...
I1227 10:05:47.546329 769388 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1227 10:05:47.584884 769388 logs.go:123] Gathering logs for kubelet ...
I1227 10:05:47.584959 769388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1227 10:05:47.653574 769388 logs.go:123] Gathering logs for dmesg ...
I1227 10:05:47.653612 769388 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1227 10:05:47.671978 769388 logs.go:123] Gathering logs for describe nodes ...
I1227 10:05:47.672006 769388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1227 10:05:47.737784 769388 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1227 10:05:47.729462 5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 10:05:47.730146 5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 10:05:47.731816 5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 10:05:47.732344 5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 10:05:47.733957 5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
E1227 10:05:47.729462 5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 10:05:47.730146 5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 10:05:47.731816 5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 10:05:47.732344 5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 10:05:47.733957 5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
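The describe-nodes failure is expected at this point: the crictl sweeps above found no kube-apiserver container, so nothing is serving port 8443 and the connection is refused rather than timing out. A quick confirmation sketch (ss and curl are standard tools, not taken from this log):

    # Nothing should be listening on the apiserver port inside the node:
    ss -ltn 'sport = :8443'
    # A live apiserver would answer (even with 401/403) instead of refusing:
    curl -sk https://localhost:8443/healthz; echo "exit=$?"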
W1227 10:05:47.737860 769388 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000305874s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
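The cgroups v1 warning states that kubelet v1.35 or newer keeps running on a v1 host only if the configuration option 'FailCgroupV1' is set to 'false'. A hypothetical sketch of applying that opt-out; the lowerCamelCase on-disk spelling failCgroupV1 is assumed from the warning text, and the change would not survive minikube rewriting /var/lib/kubelet/config.yaml on the next kubeadm run:

    # Assumed field spelling; appended as a top-level KubeletConfiguration key:
    cat >>/var/lib/kubelet/config.yaml <<'EOF'
    failCgroupV1: false
    EOF
    systemctl restart kubelet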
W1227 10:05:47.737902 769388 out.go:285] *
W1227 10:05:47.737955 769388 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000305874s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
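The last line suggests rerunning with --v=5, and the invocation minikube used is fully visible above, so the manual rerun is a matter of repeating it with the extra flag (a sketch; the --ignore-preflight-errors list is abbreviated here to the one check that actually fires):

    # Same binary path and config file as in the log above, plus verbosity:
    sudo env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=SystemVerification --v=5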
W1227 10:05:47.737974 769388 out.go:285] *
W1227 10:05:47.738225 769388 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1227 10:05:47.743845 769388 out.go:203]
W1227 10:05:47.746703 769388 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000305874s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1227 10:05:47.746744 769388 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W1227 10:05:47.746767 769388 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
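The suggestion above amounts to restarting the profile with the kubelet cgroup driver pinned explicitly. An illustrative retry (the delete step and any flag other than --extra-config are assumptions, not part of the suggestion):

    minikube delete -p force-systemd-flag-574701
    minikube start -p force-systemd-flag-574701 --driver=docker \
      --extra-config=kubelet.cgroup-driver=systemd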
I1227 10:05:47.749808 769388 out.go:203]
==> Docker <==
Dec 27 09:57:38 force-systemd-flag-574701 dockerd[1139]: time="2025-12-27T09:57:38.267209749Z" level=info msg="Restoring containers: start."
Dec 27 09:57:38 force-systemd-flag-574701 dockerd[1139]: time="2025-12-27T09:57:38.287569154Z" level=info msg="Deleting nftables IPv4 rules" error="exit status 1"
Dec 27 09:57:38 force-systemd-flag-574701 dockerd[1139]: time="2025-12-27T09:57:38.307561743Z" level=info msg="Deleting nftables IPv6 rules" error="exit status 1"
Dec 27 09:57:38 force-systemd-flag-574701 dockerd[1139]: time="2025-12-27T09:57:38.512818399Z" level=info msg="Loading containers: done."
Dec 27 09:57:38 force-systemd-flag-574701 dockerd[1139]: time="2025-12-27T09:57:38.531516903Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
Dec 27 09:57:38 force-systemd-flag-574701 dockerd[1139]: time="2025-12-27T09:57:38.531579162Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
Dec 27 09:57:38 force-systemd-flag-574701 dockerd[1139]: time="2025-12-27T09:57:38.531613729Z" level=info msg="Initializing buildkit"
Dec 27 09:57:38 force-systemd-flag-574701 dockerd[1139]: time="2025-12-27T09:57:38.552331803Z" level=info msg="Completed buildkit initialization"
Dec 27 09:57:38 force-systemd-flag-574701 dockerd[1139]: time="2025-12-27T09:57:38.558443093Z" level=info msg="Daemon has completed initialization"
Dec 27 09:57:38 force-systemd-flag-574701 dockerd[1139]: time="2025-12-27T09:57:38.558651391Z" level=info msg="API listen on /var/run/docker.sock"
Dec 27 09:57:38 force-systemd-flag-574701 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 27 09:57:38 force-systemd-flag-574701 dockerd[1139]: time="2025-12-27T09:57:38.559486059Z" level=info msg="API listen on /run/docker.sock"
Dec 27 09:57:38 force-systemd-flag-574701 dockerd[1139]: time="2025-12-27T09:57:38.559558500Z" level=info msg="API listen on [::]:2376"
Dec 27 09:57:39 force-systemd-flag-574701 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
Dec 27 09:57:39 force-systemd-flag-574701 cri-dockerd[1423]: time="2025-12-27T09:57:39Z" level=info msg="Starting cri-dockerd dev (HEAD)"
Dec 27 09:57:39 force-systemd-flag-574701 cri-dockerd[1423]: time="2025-12-27T09:57:39Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
Dec 27 09:57:39 force-systemd-flag-574701 cri-dockerd[1423]: time="2025-12-27T09:57:39Z" level=info msg="Start docker client with request timeout 0s"
Dec 27 09:57:39 force-systemd-flag-574701 cri-dockerd[1423]: time="2025-12-27T09:57:39Z" level=info msg="Hairpin mode is set to hairpin-veth"
Dec 27 09:57:39 force-systemd-flag-574701 cri-dockerd[1423]: time="2025-12-27T09:57:39Z" level=info msg="Loaded network plugin cni"
Dec 27 09:57:39 force-systemd-flag-574701 cri-dockerd[1423]: time="2025-12-27T09:57:39Z" level=info msg="Docker cri networking managed by network plugin cni"
Dec 27 09:57:39 force-systemd-flag-574701 cri-dockerd[1423]: time="2025-12-27T09:57:39Z" level=info msg="Setting cgroupDriver systemd"
Dec 27 09:57:39 force-systemd-flag-574701 cri-dockerd[1423]: time="2025-12-27T09:57:39Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
Dec 27 09:57:39 force-systemd-flag-574701 cri-dockerd[1423]: time="2025-12-27T09:57:39Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
Dec 27 09:57:39 force-systemd-flag-574701 cri-dockerd[1423]: time="2025-12-27T09:57:39Z" level=info msg="Start cri-dockerd grpc backend"
Dec 27 09:57:39 force-systemd-flag-574701 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
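cri-dockerd reports "Setting cgroupDriver systemd" above; whether dockerd itself agrees can be checked against the daemon directly (a standard docker info query, not shown in this log):

    # Compare the daemon's cgroup driver and cgroup version with what cri-dockerd set:
    docker info --format 'driver={{.CgroupDriver}} cgroup={{.CgroupVersion}}'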
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
==> describe nodes <==
command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1227 10:05:49.121495 5631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 10:05:49.122187 5631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 10:05:49.123651 5631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 10:05:49.124083 5631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 10:05:49.125486 5631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
==> dmesg <==
[ +0.131052] systemd-journald[229]: Failed to send stream file descriptor to service manager: Connection refused
[Dec27 08:52] overlayfs: idmapped layers are currently not supported
[Dec27 08:53] overlayfs: idmapped layers are currently not supported
[Dec27 08:55] overlayfs: idmapped layers are currently not supported
[Dec27 08:56] overlayfs: idmapped layers are currently not supported
[Dec27 09:02] overlayfs: idmapped layers are currently not supported
[Dec27 09:03] overlayfs: idmapped layers are currently not supported
[Dec27 09:04] overlayfs: idmapped layers are currently not supported
[Dec27 09:05] overlayfs: idmapped layers are currently not supported
[Dec27 09:06] overlayfs: idmapped layers are currently not supported
[Dec27 09:08] overlayfs: idmapped layers are currently not supported
[ +24.018537] overlayfs: idmapped layers are currently not supported
[Dec27 09:09] overlayfs: idmapped layers are currently not supported
[ +25.285275] overlayfs: idmapped layers are currently not supported
[Dec27 09:10] overlayfs: idmapped layers are currently not supported
[ +21.268238] systemd-journald[230]: Failed to send stream file descriptor to service manager: Connection refused
[Dec27 09:11] overlayfs: idmapped layers are currently not supported
[ +4.417156] overlayfs: idmapped layers are currently not supported
[ +35.863671] overlayfs: idmapped layers are currently not supported
[Dec27 09:12] overlayfs: idmapped layers are currently not supported
[Dec27 09:13] overlayfs: idmapped layers are currently not supported
[Dec27 09:14] overlayfs: idmapped layers are currently not supported
[ +22.811829] overlayfs: idmapped layers are currently not supported
[Dec27 09:16] overlayfs: idmapped layers are currently not supported
[Dec27 09:18] kauditd_printk_skb: 8 callbacks suppressed
==> kernel <==
10:05:49 up 4:48, 0 user, load average: 1.09, 0.98, 1.70
Linux force-systemd-flag-574701 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
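The kernel banner is the telling detail: 5.15.0-1084-aws with a #91~20.04.1-Ubuntu build tag means the CI host boots an Ubuntu 20.04 kernel, and focal's systemd still mounts the legacy cgroup v1 hierarchy by default; the docker-driver node (Debian bookworm userland above) inherits that layout from the host. A way to confirm on the host (a sketch; systemd.unified_cgroup_hierarchy is the standard boot switch, assumed absent here):

# No match (or =0) means the legacy/hybrid cgroup v1 layout is in use
grep -o 'systemd.unified_cgroup_hierarchy=[01]' /proc/cmdline || echo 'unified hierarchy not requested'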
==> kubelet <==
Dec 27 10:05:46 force-systemd-flag-574701 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 27 10:05:46 force-systemd-flag-574701 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
Dec 27 10:05:46 force-systemd-flag-574701 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 27 10:05:46 force-systemd-flag-574701 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 27 10:05:46 force-systemd-flag-574701 kubelet[5406]: E1227 10:05:46.846947 5406 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 27 10:05:46 force-systemd-flag-574701 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 27 10:05:46 force-systemd-flag-574701 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 27 10:05:47 force-systemd-flag-574701 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
Dec 27 10:05:47 force-systemd-flag-574701 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 27 10:05:47 force-systemd-flag-574701 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 27 10:05:47 force-systemd-flag-574701 kubelet[5479]: E1227 10:05:47.610804 5479 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 27 10:05:47 force-systemd-flag-574701 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 27 10:05:47 force-systemd-flag-574701 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 27 10:05:48 force-systemd-flag-574701 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
Dec 27 10:05:48 force-systemd-flag-574701 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 27 10:05:48 force-systemd-flag-574701 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 27 10:05:48 force-systemd-flag-574701 kubelet[5534]: E1227 10:05:48.387256 5534 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 27 10:05:48 force-systemd-flag-574701 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 27 10:05:48 force-systemd-flag-574701 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 27 10:05:49 force-systemd-flag-574701 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
Dec 27 10:05:49 force-systemd-flag-574701 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 27 10:05:49 force-systemd-flag-574701 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 27 10:05:49 force-systemd-flag-574701 kubelet[5624]: E1227 10:05:49.107929 5624 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 27 10:05:49 force-systemd-flag-574701 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 27 10:05:49 force-systemd-flag-574701 systemd[1]: kubelet.service: Failed with result 'exit-code'.
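The restart loop above is the root cause of the test failure: kubelet v1.35.0 refuses to validate its configuration on a cgroup v1 host, exits, and systemd restarts it (the counter reaches 322 above) until the test gives up. Two retrospective checks, assuming minikube wrote the generated config to the usual /var/lib/kubelet/config.yaml and that this error corresponds to the failCgroupV1 KubeletConfiguration field introduced around v1.31 (both assumptions, not confirmed by this log):

# cgroup2fs => cgroup v2; tmpfs => cgroup v1, the failing case here
stat -fc %T /sys/fs/cgroup/
# See whether the generated kubelet config opts out of cgroup v1 hosts
sudo grep -iE 'failCgroupV1|cgroupDriver' /var/lib/kubelet/config.yaml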
-- /stdout --
helpers_test.go:263: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-574701 -n force-systemd-flag-574701
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-574701 -n force-systemd-flag-574701: exit status 6 (451.825774ms)
-- stdout --
Stopped
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1227 10:05:49.827623 781915 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-574701" does not appear in /home/jenkins/minikube-integration/22343-548332/kubeconfig
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-flag-574701" apiserver is not running, skipping kubectl commands (state="Stopped")
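The stale-context warning and the kubeconfig endpoint error are secondary symptoms: the profile never got far enough for minikube to write it into the kubeconfig. Had a leftover context needed repair, the fix the output itself suggests would be (update-context is quoted from the warning above; the profile flag is inferred from this run):

# Repoint kubectl at the profile, then confirm the active context
out/minikube-linux-arm64 update-context -p force-systemd-flag-574701
kubectl config current-context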
helpers_test.go:176: Cleaning up "force-systemd-flag-574701" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-arm64 delete -p force-systemd-flag-574701
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-574701: (2.09332833s)
--- FAIL: TestForceSystemdFlag (508.16s)