=== RUN TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT TestForceSystemdFlag
docker_test.go:91: (dbg) Run: out/minikube-linux-arm64 start -p force-systemd-flag-573381 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=docker
E0110 08:59:28.841245 4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/functional-580534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:00:54.652635 4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/addons-010290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:02:20.535383 4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/skaffold-777978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:02:20.540737 4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/skaffold-777978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:02:20.551097 4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/skaffold-777978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:02:20.571502 4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/skaffold-777978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:02:20.611863 4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/skaffold-777978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:02:20.692174 4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/skaffold-777978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:02:20.852633 4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/skaffold-777978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:02:21.173259 4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/skaffold-777978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:02:21.814390 4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/skaffold-777978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:02:23.094957 4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/skaffold-777978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:02:25.655200 4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/skaffold-777978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:02:30.775474 4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/skaffold-777978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:02:41.015788 4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/skaffold-777978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:02:51.602444 4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/addons-010290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:03:01.496125 4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/skaffold-777978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:03:42.457333 4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/skaffold-777978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:04:28.846643 4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/functional-580534/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:05:04.379446 4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/skaffold-777978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:07:20.535444 4094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/skaffold-777978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-flag-573381 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=docker: exit status 109 (8m23.478153773s)
-- stdout --
* [force-systemd-flag-573381] minikube v1.37.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=22427
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/22427-2299/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-2299/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "force-systemd-flag-573381" primary control-plane node in "force-systemd-flag-573381" cluster
* Pulling base image v0.0.48-1767944074-22401 ...
-- /stdout --
** stderr **
I0110 08:59:08.534283 226492 out.go:360] Setting OutFile to fd 1 ...
I0110 08:59:08.534423 226492 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 08:59:08.534431 226492 out.go:374] Setting ErrFile to fd 2...
I0110 08:59:08.534436 226492 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 08:59:08.534716 226492 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-2299/.minikube/bin
I0110 08:59:08.535124 226492 out.go:368] Setting JSON to false
I0110 08:59:08.535945 226492 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2501,"bootTime":1768033048,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
I0110 08:59:08.536012 226492 start.go:143] virtualization:
I0110 08:59:08.540087 226492 out.go:179] * [force-systemd-flag-573381] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I0110 08:59:08.544100 226492 out.go:179] - MINIKUBE_LOCATION=22427
I0110 08:59:08.544407 226492 notify.go:221] Checking for updates...
I0110 08:59:08.550278 226492 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0110 08:59:08.553314 226492 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22427-2299/kubeconfig
I0110 08:59:08.556418 226492 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-2299/.minikube
I0110 08:59:08.559460 226492 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I0110 08:59:08.562977 226492 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I0110 08:59:08.566347 226492 config.go:182] Loaded profile config "force-systemd-env-861581": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I0110 08:59:08.566466 226492 driver.go:422] Setting default libvirt URI to qemu:///system
I0110 08:59:08.606961 226492 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
I0110 08:59:08.607065 226492 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0110 08:59:08.717444 226492 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:54 SystemTime:2026-01-10 08:59:08.708565746 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I0110 08:59:08.717541 226492 docker.go:319] overlay module found
I0110 08:59:08.721224 226492 out.go:179] * Using the docker driver based on user configuration
I0110 08:59:08.724306 226492 start.go:309] selected driver: docker
I0110 08:59:08.724327 226492 start.go:928] validating driver "docker" against <nil>
I0110 08:59:08.724341 226492 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0110 08:59:08.724965 226492 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0110 08:59:08.818940 226492 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:54 SystemTime:2026-01-10 08:59:08.808836061 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I0110 08:59:08.819091 226492 start_flags.go:333] no existing cluster config was found, will generate one from the flags
I0110 08:59:08.819299 226492 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
I0110 08:59:08.821663 226492 out.go:179] * Using Docker driver with root privileges
I0110 08:59:08.824752 226492 cni.go:84] Creating CNI manager for ""
I0110 08:59:08.824824 226492 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0110 08:59:08.824834 226492 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0110 08:59:08.824913 226492 start.go:353] cluster config:
{Name:force-systemd-flag-573381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-573381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I0110 08:59:08.829730 226492 out.go:179] * Starting "force-systemd-flag-573381" primary control-plane node in "force-systemd-flag-573381" cluster
I0110 08:59:08.832988 226492 cache.go:134] Beginning downloading kic base image for docker with docker
I0110 08:59:08.835974 226492 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
I0110 08:59:08.838696 226492 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I0110 08:59:08.838738 226492 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-2299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4
I0110 08:59:08.838749 226492 cache.go:65] Caching tarball of preloaded images
I0110 08:59:08.838829 226492 preload.go:251] Found /home/jenkins/minikube-integration/22427-2299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0110 08:59:08.838837 226492 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
I0110 08:59:08.838952 226492 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/config.json ...
I0110 08:59:08.838969 226492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/config.json: {Name:mk792ad7b15ee4a35e6dcc78722d34e91cdf2a22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0110 08:59:08.839095 226492 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
I0110 08:59:08.864802 226492 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
I0110 08:59:08.864821 226492 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
I0110 08:59:08.864835 226492 cache.go:243] Successfully downloaded all kic artifacts
I0110 08:59:08.864865 226492 start.go:360] acquireMachinesLock for force-systemd-flag-573381: {Name:mk03eb5fbb2bba12d438b336944081d9ef274656 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0110 08:59:08.864956 226492 start.go:364] duration metric: took 76.341µs to acquireMachinesLock for "force-systemd-flag-573381"
I0110 08:59:08.864979 226492 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-573381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-573381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I0110 08:59:08.865046 226492 start.go:125] createHost starting for "" (driver="docker")
I0110 08:59:08.868543 226492 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
I0110 08:59:08.868782 226492 start.go:159] libmachine.API.Create for "force-systemd-flag-573381" (driver="docker")
I0110 08:59:08.868812 226492 client.go:173] LocalClient.Create starting
I0110 08:59:08.868883 226492 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca.pem
I0110 08:59:08.868918 226492 main.go:144] libmachine: Decoding PEM data...
I0110 08:59:08.868933 226492 main.go:144] libmachine: Parsing certificate...
I0110 08:59:08.868978 226492 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-2299/.minikube/certs/cert.pem
I0110 08:59:08.869002 226492 main.go:144] libmachine: Decoding PEM data...
I0110 08:59:08.869013 226492 main.go:144] libmachine: Parsing certificate...
I0110 08:59:08.869403 226492 cli_runner.go:164] Run: docker network inspect force-systemd-flag-573381 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0110 08:59:08.885872 226492 cli_runner.go:211] docker network inspect force-systemd-flag-573381 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0110 08:59:08.885961 226492 network_create.go:284] running [docker network inspect force-systemd-flag-573381] to gather additional debugging logs...
I0110 08:59:08.885976 226492 cli_runner.go:164] Run: docker network inspect force-systemd-flag-573381
W0110 08:59:08.905316 226492 cli_runner.go:211] docker network inspect force-systemd-flag-573381 returned with exit code 1
I0110 08:59:08.905422 226492 network_create.go:287] error running [docker network inspect force-systemd-flag-573381]: docker network inspect force-systemd-flag-573381: exit status 1
stdout:
[]
stderr:
Error response from daemon: network force-systemd-flag-573381 not found
I0110 08:59:08.905445 226492 network_create.go:289] output of [docker network inspect force-systemd-flag-573381]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network force-systemd-flag-573381 not found
** /stderr **
I0110 08:59:08.905535 226492 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0110 08:59:08.924865 226492 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1cad6f167682 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:2e:00:65:f8:e1} reservation:<nil>}
I0110 08:59:08.925148 226492 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-470266542ec0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ae:41:d2:db:7c:3c} reservation:<nil>}
I0110 08:59:08.925444 226492 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ed6e044af825 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:1d:61:47:90:b1} reservation:<nil>}
I0110 08:59:08.925750 226492 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-322c731839f0 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:c6:f9:1c:29:7d:48} reservation:<nil>}
I0110 08:59:08.926117 226492 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a88410}
I0110 08:59:08.926138 226492 network_create.go:124] attempt to create docker network force-systemd-flag-573381 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
I0110 08:59:08.926194 226492 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-573381 force-systemd-flag-573381
I0110 08:59:09.004073 226492 network_create.go:108] docker network force-systemd-flag-573381 192.168.85.0/24 created
I0110 08:59:09.004107 226492 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-573381" container
I0110 08:59:09.004205 226492 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0110 08:59:09.022515 226492 cli_runner.go:164] Run: docker volume create force-systemd-flag-573381 --label name.minikube.sigs.k8s.io=force-systemd-flag-573381 --label created_by.minikube.sigs.k8s.io=true
I0110 08:59:09.042894 226492 oci.go:103] Successfully created a docker volume force-systemd-flag-573381
I0110 08:59:09.042990 226492 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-573381-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-573381 --entrypoint /usr/bin/test -v force-systemd-flag-573381:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
I0110 08:59:09.628587 226492 oci.go:107] Successfully prepared a docker volume force-systemd-flag-573381
I0110 08:59:09.628655 226492 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I0110 08:59:09.628667 226492 kic.go:194] Starting extracting preloaded images to volume ...
I0110 08:59:09.628730 226492 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-2299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-573381:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
I0110 08:59:12.873367 226492 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-2299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-573381:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir: (3.244512326s)
I0110 08:59:12.873399 226492 kic.go:203] duration metric: took 3.244728311s to extract preloaded images to volume ...
W0110 08:59:12.873534 226492 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0110 08:59:12.873643 226492 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0110 08:59:12.964719 226492 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-573381 --name force-systemd-flag-573381 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-573381 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-573381 --network force-systemd-flag-573381 --ip 192.168.85.2 --volume force-systemd-flag-573381:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
I0110 08:59:13.335555 226492 cli_runner.go:164] Run: docker container inspect force-systemd-flag-573381 --format={{.State.Running}}
I0110 08:59:13.363137 226492 cli_runner.go:164] Run: docker container inspect force-systemd-flag-573381 --format={{.State.Status}}
I0110 08:59:13.385096 226492 cli_runner.go:164] Run: docker exec force-systemd-flag-573381 stat /var/lib/dpkg/alternatives/iptables
I0110 08:59:13.441925 226492 oci.go:144] the created container "force-systemd-flag-573381" has a running status.
I0110 08:59:13.441953 226492 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22427-2299/.minikube/machines/force-systemd-flag-573381/id_rsa...
I0110 08:59:13.817711 226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/machines/force-systemd-flag-573381/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0110 08:59:13.817809 226492 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22427-2299/.minikube/machines/force-systemd-flag-573381/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0110 08:59:13.849514 226492 cli_runner.go:164] Run: docker container inspect force-systemd-flag-573381 --format={{.State.Status}}
I0110 08:59:13.876467 226492 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0110 08:59:13.876490 226492 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-573381 chown docker:docker /home/docker/.ssh/authorized_keys]
I0110 08:59:13.967478 226492 cli_runner.go:164] Run: docker container inspect force-systemd-flag-573381 --format={{.State.Status}}
I0110 08:59:14.002485 226492 machine.go:94] provisionDockerMachine start ...
I0110 08:59:14.002580 226492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-573381
I0110 08:59:14.031458 226492 main.go:144] libmachine: Using SSH client type: native
I0110 08:59:14.031817 226492 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil> [] 0s} 127.0.0.1 33002 <nil> <nil>}
I0110 08:59:14.031827 226492 main.go:144] libmachine: About to run SSH command:
hostname
I0110 08:59:14.032463 226492 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55256->127.0.0.1:33002: read: connection reset by peer
I0110 08:59:17.189005 226492 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-573381
I0110 08:59:17.189034 226492 ubuntu.go:182] provisioning hostname "force-systemd-flag-573381"
I0110 08:59:17.189096 226492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-573381
I0110 08:59:17.213646 226492 main.go:144] libmachine: Using SSH client type: native
I0110 08:59:17.213955 226492 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil> [] 0s} 127.0.0.1 33002 <nil> <nil>}
I0110 08:59:17.213988 226492 main.go:144] libmachine: About to run SSH command:
sudo hostname force-systemd-flag-573381 && echo "force-systemd-flag-573381" | sudo tee /etc/hostname
I0110 08:59:17.393000 226492 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-573381
I0110 08:59:17.393073 226492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-573381
I0110 08:59:17.417619 226492 main.go:144] libmachine: Using SSH client type: native
I0110 08:59:17.417930 226492 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil> [] 0s} 127.0.0.1 33002 <nil> <nil>}
I0110 08:59:17.417946 226492 main.go:144] libmachine: About to run SSH command:
if ! grep -xq '.*\sforce-systemd-flag-573381' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-573381/g' /etc/hosts;
else
echo '127.0.1.1 force-systemd-flag-573381' | sudo tee -a /etc/hosts;
fi
fi
I0110 08:59:17.577322 226492 main.go:144] libmachine: SSH cmd err, output: <nil>:
I0110 08:59:17.577379 226492 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22427-2299/.minikube CaCertPath:/home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22427-2299/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22427-2299/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22427-2299/.minikube}
I0110 08:59:17.577405 226492 ubuntu.go:190] setting up certificates
I0110 08:59:17.577415 226492 provision.go:84] configureAuth start
I0110 08:59:17.577472 226492 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-573381
I0110 08:59:17.603411 226492 provision.go:143] copyHostCerts
I0110 08:59:17.603458 226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22427-2299/.minikube/ca.pem
I0110 08:59:17.603498 226492 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-2299/.minikube/ca.pem, removing ...
I0110 08:59:17.603505 226492 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-2299/.minikube/ca.pem
I0110 08:59:17.603594 226492 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22427-2299/.minikube/ca.pem (1082 bytes)
I0110 08:59:17.603679 226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22427-2299/.minikube/cert.pem
I0110 08:59:17.603697 226492 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-2299/.minikube/cert.pem, removing ...
I0110 08:59:17.603701 226492 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-2299/.minikube/cert.pem
I0110 08:59:17.603727 226492 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22427-2299/.minikube/cert.pem (1123 bytes)
I0110 08:59:17.603777 226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22427-2299/.minikube/key.pem
I0110 08:59:17.603792 226492 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-2299/.minikube/key.pem, removing ...
I0110 08:59:17.603796 226492 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-2299/.minikube/key.pem
I0110 08:59:17.603818 226492 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22427-2299/.minikube/key.pem (1679 bytes)
I0110 08:59:17.603870 226492 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22427-2299/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-573381 san=[127.0.0.1 192.168.85.2 force-systemd-flag-573381 localhost minikube]
I0110 08:59:18.101227 226492 provision.go:177] copyRemoteCerts
I0110 08:59:18.101309 226492 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0110 08:59:18.101374 226492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-573381
I0110 08:59:18.120236 226492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33002 SSHKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/machines/force-systemd-flag-573381/id_rsa Username:docker}
I0110 08:59:18.228191 226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0110 08:59:18.228270 226492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0110 08:59:18.252222 226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/machines/server.pem -> /etc/docker/server.pem
I0110 08:59:18.252289 226492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
I0110 08:59:18.276205 226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0110 08:59:18.276272 226492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0110 08:59:18.301241 226492 provision.go:87] duration metric: took 723.793723ms to configureAuth
I0110 08:59:18.301273 226492 ubuntu.go:206] setting minikube options for container-runtime
I0110 08:59:18.301552 226492 config.go:182] Loaded profile config "force-systemd-flag-573381": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I0110 08:59:18.301635 226492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-573381
I0110 08:59:18.333060 226492 main.go:144] libmachine: Using SSH client type: native
I0110 08:59:18.333475 226492 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil> [] 0s} 127.0.0.1 33002 <nil> <nil>}
I0110 08:59:18.333499 226492 main.go:144] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0110 08:59:18.486799 226492 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
I0110 08:59:18.486870 226492 ubuntu.go:71] root file system type: overlay
I0110 08:59:18.487027 226492 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0110 08:59:18.487127 226492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-573381
I0110 08:59:18.522677 226492 main.go:144] libmachine: Using SSH client type: native
I0110 08:59:18.522986 226492 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil> [] 0s} 127.0.0.1 33002 <nil> <nil>}
I0110 08:59:18.523069 226492 main.go:144] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
Wants=network-online.target containerd.service
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=always
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
-H fd:// --containerd=/run/containerd/containerd.sock \
-H unix:///var/run/docker.sock \
--default-ulimit=nofile=1048576:1048576 \
--tlsverify \
--tlscacert /etc/docker/ca.pem \
--tlscert /etc/docker/server.pem \
--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0110 08:59:18.721846 226492 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
Wants=network-online.target containerd.service
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=always
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H fd:// --containerd=/run/containerd/containerd.sock -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
I0110 08:59:18.721925 226492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-573381
I0110 08:59:18.755481 226492 main.go:144] libmachine: Using SSH client type: native
I0110 08:59:18.755783 226492 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil> [] 0s} 127.0.0.1 33002 <nil> <nil>}
I0110 08:59:18.755815 226492 main.go:144] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0110 08:59:19.935990 226492 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2025-12-12 14:49:02.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2026-01-10 08:59:18.711684691 +0000
@@ -9,23 +9,34 @@
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
Restart=always
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H fd:// --containerd=/run/containerd/containerd.sock -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
+
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
+LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0110 08:59:19.936016 226492 machine.go:97] duration metric: took 5.933508422s to provisionDockerMachine
I0110 08:59:19.936028 226492 client.go:176] duration metric: took 11.067209235s to LocalClient.Create
I0110 08:59:19.936041 226492 start.go:167] duration metric: took 11.067259614s to libmachine.API.Create "force-systemd-flag-573381"
I0110 08:59:19.936049 226492 start.go:293] postStartSetup for "force-systemd-flag-573381" (driver="docker")
I0110 08:59:19.936059 226492 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0110 08:59:19.936120 226492 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0110 08:59:19.936159 226492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-573381
I0110 08:59:19.958962 226492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33002 SSHKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/machines/force-systemd-flag-573381/id_rsa Username:docker}
I0110 08:59:20.074923 226492 ssh_runner.go:195] Run: cat /etc/os-release
I0110 08:59:20.079159 226492 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0110 08:59:20.079189 226492 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I0110 08:59:20.079201 226492 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-2299/.minikube/addons for local assets ...
I0110 08:59:20.079266 226492 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-2299/.minikube/files for local assets ...
I0110 08:59:20.079356 226492 filesync.go:149] local asset: /home/jenkins/minikube-integration/22427-2299/.minikube/files/etc/ssl/certs/40942.pem -> 40942.pem in /etc/ssl/certs
I0110 08:59:20.079369 226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/files/etc/ssl/certs/40942.pem -> /etc/ssl/certs/40942.pem
I0110 08:59:20.079482 226492 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0110 08:59:20.088316 226492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/files/etc/ssl/certs/40942.pem --> /etc/ssl/certs/40942.pem (1708 bytes)
I0110 08:59:20.110932 226492 start.go:296] duration metric: took 174.869214ms for postStartSetup
I0110 08:59:20.111307 226492 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-573381
I0110 08:59:20.129061 226492 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/config.json ...
I0110 08:59:20.129339 226492 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0110 08:59:20.129450 226492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-573381
I0110 08:59:20.146816 226492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33002 SSHKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/machines/force-systemd-flag-573381/id_rsa Username:docker}
I0110 08:59:20.266379 226492 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0110 08:59:20.271370 226492 start.go:128] duration metric: took 11.406310013s to createHost
I0110 08:59:20.271395 226492 start.go:83] releasing machines lock for "force-systemd-flag-573381", held for 11.406430793s
I0110 08:59:20.271464 226492 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-573381
I0110 08:59:20.288793 226492 ssh_runner.go:195] Run: cat /version.json
I0110 08:59:20.288851 226492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-573381
I0110 08:59:20.289074 226492 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0110 08:59:20.289133 226492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-573381
I0110 08:59:20.322735 226492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33002 SSHKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/machines/force-systemd-flag-573381/id_rsa Username:docker}
I0110 08:59:20.334868 226492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33002 SSHKeyPath:/home/jenkins/minikube-integration/22427-2299/.minikube/machines/force-systemd-flag-573381/id_rsa Username:docker}
I0110 08:59:20.541956 226492 ssh_runner.go:195] Run: systemctl --version
I0110 08:59:20.549992 226492 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0110 08:59:20.556905 226492 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0110 08:59:20.556995 226492 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0110 08:59:20.586357 226492 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
I0110 08:59:20.586433 226492 start.go:496] detecting cgroup driver to use...
I0110 08:59:20.586462 226492 start.go:500] using "systemd" cgroup driver as enforced via flags
I0110 08:59:20.586586 226492 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0110 08:59:20.601310 226492 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I0110 08:59:20.610472 226492 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0110 08:59:20.619345 226492 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
I0110 08:59:20.619503 226492 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
I0110 08:59:20.631919 226492 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0110 08:59:20.640858 226492 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0110 08:59:20.650267 226492 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0110 08:59:20.659888 226492 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0110 08:59:20.668204 226492 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0110 08:59:20.677415 226492 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0110 08:59:20.688627 226492 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
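The run of sed edits above rewrites /etc/containerd/config.toml in place: systemd cgroups, the pause 3.10.1 sandbox image, the runc v2 runtime, the CNI conf_dir, and unprivileged ports. A quick sketch for spot-checking the result from the host running the test (the container name comes from this profile; the expected values are inferred from the substitutions above, not from a dump of the file):

docker exec force-systemd-flag-573381 sh -c \
  'grep -nE "SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports" /etc/containerd/config.toml'
# Expected, give or take section context:
#   SystemdCgroup = true
#   sandbox_image = "registry.k8s.io/pause:3.10.1"
#   conf_dir = "/etc/cni/net.d"
#   enable_unprivileged_ports = true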
I0110 08:59:20.697816 226492 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0110 08:59:20.705665 226492 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0110 08:59:20.713436 226492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0110 08:59:20.851878 226492 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0110 08:59:20.975957 226492 start.go:496] detecting cgroup driver to use...
I0110 08:59:20.976035 226492 start.go:500] using "systemd" cgroup driver as enforced via flags
I0110 08:59:20.976120 226492 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0110 08:59:20.994585 226492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0110 08:59:21.015980 226492 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0110 08:59:21.047963 226492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0110 08:59:21.061003 226492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0110 08:59:21.076487 226492 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0110 08:59:21.092674 226492 ssh_runner.go:195] Run: which cri-dockerd
I0110 08:59:21.096718 226492 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0110 08:59:21.104845 226492 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
I0110 08:59:21.119518 226492 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0110 08:59:21.267305 226492 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0110 08:59:21.412794 226492 docker.go:578] configuring docker to use "systemd" as cgroup driver...
I0110 08:59:21.412940 226492 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
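The 129-byte /etc/docker/daemon.json pushed here is the piece that actually honors --force-systemd for the Docker engine; its contents are not echoed into the log. A plausible sketch (only the exec-opts key is essential for this test; anything else minikube writes is an assumption) together with the check the test ultimately relies on:

sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl restart docker
docker info --format '{{.CgroupDriver}}'   # TestForceSystemdFlag expects "systemd"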
I0110 08:59:21.428668 226492 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
I0110 08:59:21.442271 226492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0110 08:59:21.585985 226492 ssh_runner.go:195] Run: sudo systemctl restart docker
I0110 08:59:22.079009 226492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0110 08:59:22.093689 226492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0110 08:59:22.109192 226492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0110 08:59:22.124141 226492 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0110 08:59:22.285826 226492 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0110 08:59:22.470044 226492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0110 08:59:22.631147 226492 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0110 08:59:22.649887 226492 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
I0110 08:59:22.664808 226492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0110 08:59:22.817595 226492 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0110 08:59:22.901926 226492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0110 08:59:22.921322 226492 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0110 08:59:22.921557 226492 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0110 08:59:22.926346 226492 start.go:574] Will wait 60s for crictl version
I0110 08:59:22.926464 226492 ssh_runner.go:195] Run: which crictl
I0110 08:59:22.930949 226492 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I0110 08:59:22.967399 226492 start.go:590] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 29.1.3
RuntimeApiVersion: v1
I0110 08:59:22.967545 226492 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0110 08:59:23.013575 226492 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0110 08:59:23.047281 226492 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 29.1.3 ...
I0110 08:59:23.047431 226492 cli_runner.go:164] Run: docker network inspect force-systemd-flag-573381 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0110 08:59:23.066948 226492 ssh_runner.go:195] Run: grep 192.168.85.1 host.minikube.internal$ /etc/hosts
I0110 08:59:23.071229 226492 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0110 08:59:23.080762 226492 kubeadm.go:884] updating cluster {Name:force-systemd-flag-573381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-573381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
I0110 08:59:23.080873 226492 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I0110 08:59:23.080927 226492 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0110 08:59:23.099976 226492 docker.go:694] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
registry.k8s.io/pause:3.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0110 08:59:23.099997 226492 docker.go:624] Images already preloaded, skipping extraction
I0110 08:59:23.100066 226492 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0110 08:59:23.131172 226492 docker.go:694] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
registry.k8s.io/pause:3.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0110 08:59:23.131194 226492 cache_images.go:86] Images are preloaded, skipping loading
I0110 08:59:23.131204 226492 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 docker true true} ...
I0110 08:59:23.131305 226492 kubeadm.go:947] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-573381 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
[Install]
config:
{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-573381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0110 08:59:23.131368 226492 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0110 08:59:23.199852 226492 cni.go:84] Creating CNI manager for ""
I0110 08:59:23.199937 226492 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0110 08:59:23.199990 226492 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I0110 08:59:23.200028 226492 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-573381 NodeName:force-systemd-flag-573381 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0110 08:59:23.200180 226492 kubeadm.go:203] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.85.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "force-systemd-flag-573381"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.85.2"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.35.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
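The decisive line in this generated config is cgroupDriver: systemd in the KubeletConfiguration block; it has to agree with what Docker reports or the kubelet will refuse to run pods. Once the file lands on the node as /var/tmp/minikube/kubeadm.yaml (see the scp and cp steps below), a hand-rolled consistency check could look like this (a sketch only; paths taken from this log):

want=$(grep -m1 '^cgroupDriver:' /var/tmp/minikube/kubeadm.yaml | awk '{print $2}')
got=$(docker info --format '{{.CgroupDriver}}')
echo "kubelet config: $want / docker engine: $got"
[ "$want" = "$got" ] || echo 'cgroup driver mismatch'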
I0110 08:59:23.200298 226492 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
I0110 08:59:23.208388 226492 binaries.go:51] Found k8s binaries, skipping transfer
I0110 08:59:23.208452 226492 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0110 08:59:23.216341 226492 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
I0110 08:59:23.229196 226492 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0110 08:59:23.241814 226492 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
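The 324-byte drop-in written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf carries the kubelet command line shown at kubeadm.go:947 earlier; its exact bytes are not logged. A sketch of what such a drop-in would look like for this profile (unit layout assumed, flag values copied from the ExecStart printed above):

cat <<'EOF' | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-573381 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2

[Install]
EOF
sudo systemctl daemon-reload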
I0110 08:59:23.255178 226492 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I0110 08:59:23.258978 226492 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0110 08:59:23.269270 226492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0110 08:59:23.403518 226492 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0110 08:59:23.422001 226492 certs.go:69] Setting up /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381 for IP: 192.168.85.2
I0110 08:59:23.422072 226492 certs.go:195] generating shared ca certs ...
I0110 08:59:23.422112 226492 certs.go:227] acquiring lock for ca certs: {Name:mk8055241a73ed80e6751b465b7d27c66c028c26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0110 08:59:23.422308 226492 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22427-2299/.minikube/ca.key
I0110 08:59:23.422375 226492 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22427-2299/.minikube/proxy-client-ca.key
I0110 08:59:23.422398 226492 certs.go:257] generating profile certs ...
I0110 08:59:23.422483 226492 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/client.key
I0110 08:59:23.422517 226492 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/client.crt with IP's: []
I0110 08:59:23.559653 226492 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/client.crt ...
I0110 08:59:23.559734 226492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/client.crt: {Name:mkcd1531c8c1d18ccd6c5fe039b9f1900cfb2c6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0110 08:59:23.559957 226492 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/client.key ...
I0110 08:59:23.559993 226492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/client.key: {Name:mk3d7418d4f308035237fc3f9abca77e176904a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0110 08:59:23.560151 226492 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/apiserver.key.96bd0e88
I0110 08:59:23.560193 226492 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/apiserver.crt.96bd0e88 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
I0110 08:59:23.877470 226492 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/apiserver.crt.96bd0e88 ...
I0110 08:59:23.877541 226492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/apiserver.crt.96bd0e88: {Name:mk908398532d92633125c591bd292afec3cf2db0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0110 08:59:23.877769 226492 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/apiserver.key.96bd0e88 ...
I0110 08:59:23.877802 226492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/apiserver.key.96bd0e88: {Name:mk0bd2f2259a70d86d7ac055c0b2e17ebe7e9105 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0110 08:59:23.877941 226492 certs.go:382] copying /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/apiserver.crt.96bd0e88 -> /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/apiserver.crt
I0110 08:59:23.878079 226492 certs.go:386] copying /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/apiserver.key.96bd0e88 -> /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/apiserver.key
I0110 08:59:23.878168 226492 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/proxy-client.key
I0110 08:59:23.878219 226492 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/proxy-client.crt with IP's: []
I0110 08:59:24.034669 226492 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/proxy-client.crt ...
I0110 08:59:24.034725 226492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/proxy-client.crt: {Name:mkf9293bc335f7385742865bf35c11d43e999969 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0110 08:59:24.034928 226492 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/proxy-client.key ...
I0110 08:59:24.034967 226492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/proxy-client.key: {Name:mk223176e848184d582c970ee99983183f6c07ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0110 08:59:24.035099 226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0110 08:59:24.035145 226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0110 08:59:24.035175 226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0110 08:59:24.035220 226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0110 08:59:24.035255 226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0110 08:59:24.035287 226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0110 08:59:24.035332 226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0110 08:59:24.035369 226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0110 08:59:24.035459 226492 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/4094.pem (1338 bytes)
W0110 08:59:24.035533 226492 certs.go:480] ignoring /home/jenkins/minikube-integration/22427-2299/.minikube/certs/4094_empty.pem, impossibly tiny 0 bytes
I0110 08:59:24.035575 226492 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca-key.pem (1675 bytes)
I0110 08:59:24.035631 226492 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/ca.pem (1082 bytes)
I0110 08:59:24.035696 226492 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/cert.pem (1123 bytes)
I0110 08:59:24.035747 226492 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/key.pem (1679 bytes)
I0110 08:59:24.035834 226492 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2299/.minikube/files/etc/ssl/certs/40942.pem (1708 bytes)
I0110 08:59:24.035892 226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/files/etc/ssl/certs/40942.pem -> /usr/share/ca-certificates/40942.pem
I0110 08:59:24.035944 226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0110 08:59:24.035978 226492 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2299/.minikube/certs/4094.pem -> /usr/share/ca-certificates/4094.pem
I0110 08:59:24.036583 226492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0110 08:59:24.054568 226492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0110 08:59:24.076860 226492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0110 08:59:24.095940 226492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0110 08:59:24.118670 226492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
I0110 08:59:24.138950 226492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0110 08:59:24.164637 226492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0110 08:59:24.184864 226492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/force-systemd-flag-573381/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0110 08:59:24.209003 226492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/files/etc/ssl/certs/40942.pem --> /usr/share/ca-certificates/40942.pem (1708 bytes)
I0110 08:59:24.231074 226492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0110 08:59:24.255151 226492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2299/.minikube/certs/4094.pem --> /usr/share/ca-certificates/4094.pem (1338 bytes)
I0110 08:59:24.278140 226492 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
I0110 08:59:24.295203 226492 ssh_runner.go:195] Run: openssl version
I0110 08:59:24.301854 226492 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/40942.pem
I0110 08:59:24.309375 226492 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/40942.pem /etc/ssl/certs/40942.pem
I0110 08:59:24.318264 226492 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/40942.pem
I0110 08:59:24.324783 226492 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 08:26 /usr/share/ca-certificates/40942.pem
I0110 08:59:24.324855 226492 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/40942.pem
I0110 08:59:24.372891 226492 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I0110 08:59:24.381791 226492 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/40942.pem /etc/ssl/certs/3ec20f2e.0
I0110 08:59:24.390489 226492 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I0110 08:59:24.398828 226492 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I0110 08:59:24.407913 226492 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0110 08:59:24.412372 226492 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 08:21 /usr/share/ca-certificates/minikubeCA.pem
I0110 08:59:24.412452 226492 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0110 08:59:24.473682 226492 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I0110 08:59:24.483843 226492 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
I0110 08:59:24.492073 226492 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4094.pem
I0110 08:59:24.499221 226492 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4094.pem /etc/ssl/certs/4094.pem
I0110 08:59:24.506976 226492 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4094.pem
I0110 08:59:24.511249 226492 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 08:26 /usr/share/ca-certificates/4094.pem
I0110 08:59:24.511365 226492 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4094.pem
I0110 08:59:24.552759 226492 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I0110 08:59:24.560412 226492 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4094.pem /etc/ssl/certs/51391683.0
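Each CA is installed twice: under a readable name in /usr/share/ca-certificates and as an OpenSSL subject-hash symlink in /etc/ssl/certs (3ec20f2e.0, b5213941.0 and 51391683.0 above), which is how OpenSSL-linked clients locate it during verification. The generic form of that last step, for any of the certs copied above:

cert=/usr/share/ca-certificates/minikubeCA.pem          # or 4094.pem, 40942.pem
hash=$(openssl x509 -hash -noout -in "$cert")
sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"
ls -l "/etc/ssl/certs/${hash}.0"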
I0110 08:59:24.569027 226492 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0110 08:59:24.572610 226492 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0110 08:59:24.572709 226492 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-573381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-573381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I0110 08:59:24.572848 226492 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0110 08:59:24.589197 226492 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0110 08:59:24.597153 226492 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0110 08:59:24.604880 226492 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I0110 08:59:24.604945 226492 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0110 08:59:24.612455 226492 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0110 08:59:24.612476 226492 kubeadm.go:158] found existing configuration files:
I0110 08:59:24.612556 226492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0110 08:59:24.620165 226492 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0110 08:59:24.620239 226492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0110 08:59:24.627301 226492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0110 08:59:24.634604 226492 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0110 08:59:24.634677 226492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0110 08:59:24.642012 226492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0110 08:59:24.649868 226492 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0110 08:59:24.649940 226492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0110 08:59:24.657446 226492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0110 08:59:24.665005 226492 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0110 08:59:24.665080 226492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0110 08:59:24.672497 226492 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0110 08:59:24.714035 226492 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
I0110 08:59:24.714212 226492 kubeadm.go:319] [preflight] Running pre-flight checks
I0110 08:59:24.792424 226492 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I0110 08:59:24.792577 226492 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I0110 08:59:24.792652 226492 kubeadm.go:319] OS: Linux
I0110 08:59:24.792740 226492 kubeadm.go:319] CGROUPS_CPU: enabled
I0110 08:59:24.792828 226492 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I0110 08:59:24.792903 226492 kubeadm.go:319] CGROUPS_CPUSET: enabled
I0110 08:59:24.792981 226492 kubeadm.go:319] CGROUPS_DEVICES: enabled
I0110 08:59:24.793060 226492 kubeadm.go:319] CGROUPS_FREEZER: enabled
I0110 08:59:24.793140 226492 kubeadm.go:319] CGROUPS_MEMORY: enabled
I0110 08:59:24.793217 226492 kubeadm.go:319] CGROUPS_PIDS: enabled
I0110 08:59:24.793293 226492 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I0110 08:59:24.793408 226492 kubeadm.go:319] CGROUPS_BLKIO: enabled
I0110 08:59:24.860290 226492 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I0110 08:59:24.860468 226492 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0110 08:59:24.860597 226492 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0110 08:59:24.877774 226492 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0110 08:59:24.883916 226492 out.go:252] - Generating certificates and keys ...
I0110 08:59:24.884078 226492 kubeadm.go:319] [certs] Using existing ca certificate authority
I0110 08:59:24.884189 226492 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I0110 08:59:25.017207 226492 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I0110 08:59:25.505301 226492 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I0110 08:59:25.598478 226492 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I0110 08:59:25.907160 226492 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I0110 08:59:26.177844 226492 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I0110 08:59:26.178499 226492 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-573381 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I0110 08:59:26.496023 226492 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I0110 08:59:26.496358 226492 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-573381 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I0110 08:59:26.690002 226492 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I0110 08:59:27.036356 226492 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I0110 08:59:27.401186 226492 kubeadm.go:319] [certs] Generating "sa" key and public key
I0110 08:59:27.401511 226492 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0110 08:59:27.640969 226492 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I0110 08:59:27.949614 226492 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0110 08:59:28.312484 226492 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0110 08:59:28.649712 226492 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0110 08:59:29.128888 226492 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0110 08:59:29.129663 226492 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0110 08:59:29.133359 226492 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0110 08:59:29.136985 226492 out.go:252] - Booting up control plane ...
I0110 08:59:29.137093 226492 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0110 08:59:29.137176 226492 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0110 08:59:29.138118 226492 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0110 08:59:29.186624 226492 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0110 08:59:29.186957 226492 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I0110 08:59:29.196035 226492 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I0110 08:59:29.196583 226492 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0110 08:59:29.196880 226492 kubeadm.go:319] [kubelet-start] Starting the kubelet
I0110 08:59:29.335046 226492 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0110 08:59:29.335207 226492 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0110 09:03:29.334568 226492 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001279732s
I0110 09:03:29.334620 226492 kubeadm.go:319]
I0110 09:03:29.334691 226492 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I0110 09:03:29.334725 226492 kubeadm.go:319] - The kubelet is not running
I0110 09:03:29.334838 226492 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0110 09:03:29.334843 226492 kubeadm.go:319]
I0110 09:03:29.334951 226492 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0110 09:03:29.334987 226492 kubeadm.go:319] - 'systemctl status kubelet'
I0110 09:03:29.335018 226492 kubeadm.go:319] - 'journalctl -xeu kubelet'
I0110 09:03:29.335022 226492 kubeadm.go:319]
I0110 09:03:29.338362 226492 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I0110 09:03:29.338843 226492 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I0110 09:03:29.339001 226492 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0110 09:03:29.339272 226492 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I0110 09:03:29.339282 226492 kubeadm.go:319]
I0110 09:03:29.339450 226492 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
W0110 09:03:29.339564 226492 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-573381 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-573381 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001279732s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
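At this point the first kubeadm init attempt has failed because the kubelet never answered on 127.0.0.1:10248 within the 4-minute window. Beyond the generic hints kubeadm prints, a triage sketch tailored to this test is below; the failCgroupV1 line is the remediation the preflight warning itself names, and treating it as the root cause here is an assumption, not a confirmed diagnosis:

sudo systemctl status kubelet --no-pager
sudo journalctl -xeu kubelet --no-pager | tail -n 40
docker info --format '{{.CgroupDriver}}'          # must say systemd for this test
grep cgroupDriver /var/lib/kubelet/config.yaml    # must also say systemd
# Candidate fix from the cgroups-v1 warning above (assumes a cgroup v1 host):
echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
sudo systemctl restart kubelet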
I0110 09:03:29.339670 226492 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
I0110 09:03:29.762408 226492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0110 09:03:29.775647 226492 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I0110 09:03:29.775764 226492 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0110 09:03:29.783284 226492 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0110 09:03:29.783304 226492 kubeadm.go:158] found existing configuration files:
I0110 09:03:29.783360 226492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0110 09:03:29.790865 226492 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0110 09:03:29.790931 226492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0110 09:03:29.798651 226492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0110 09:03:29.806487 226492 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0110 09:03:29.806554 226492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0110 09:03:29.813908 226492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0110 09:03:29.821677 226492 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0110 09:03:29.821788 226492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0110 09:03:29.829171 226492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0110 09:03:29.836791 226492 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0110 09:03:29.836888 226492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0110 09:03:29.844589 226492 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0110 09:03:29.883074 226492 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
I0110 09:03:29.883137 226492 kubeadm.go:319] [preflight] Running pre-flight checks
I0110 09:03:29.995124 226492 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I0110 09:03:29.995217 226492 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I0110 09:03:29.995281 226492 kubeadm.go:319] OS: Linux
I0110 09:03:29.995380 226492 kubeadm.go:319] CGROUPS_CPU: enabled
I0110 09:03:29.995459 226492 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I0110 09:03:29.995536 226492 kubeadm.go:319] CGROUPS_CPUSET: enabled
I0110 09:03:29.995609 226492 kubeadm.go:319] CGROUPS_DEVICES: enabled
I0110 09:03:29.995686 226492 kubeadm.go:319] CGROUPS_FREEZER: enabled
I0110 09:03:29.995789 226492 kubeadm.go:319] CGROUPS_MEMORY: enabled
I0110 09:03:29.995871 226492 kubeadm.go:319] CGROUPS_PIDS: enabled
I0110 09:03:29.995956 226492 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I0110 09:03:29.996048 226492 kubeadm.go:319] CGROUPS_BLKIO: enabled
I0110 09:03:30.094129 226492 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I0110 09:03:30.094508 226492 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0110 09:03:30.094661 226492 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0110 09:03:30.113829 226492 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0110 09:03:30.119048 226492 out.go:252] - Generating certificates and keys ...
I0110 09:03:30.119164 226492 kubeadm.go:319] [certs] Using existing ca certificate authority
I0110 09:03:30.119263 226492 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I0110 09:03:30.119389 226492 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0110 09:03:30.119469 226492 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
I0110 09:03:30.119568 226492 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
I0110 09:03:30.119637 226492 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
I0110 09:03:30.119720 226492 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
I0110 09:03:30.119798 226492 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
I0110 09:03:30.119888 226492 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0110 09:03:30.119990 226492 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0110 09:03:30.120045 226492 kubeadm.go:319] [certs] Using the existing "sa" key
I0110 09:03:30.120121 226492 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0110 09:03:30.292257 226492 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I0110 09:03:30.550762 226492 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0110 09:03:30.719598 226492 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0110 09:03:30.988775 226492 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0110 09:03:31.135675 226492 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0110 09:03:31.136918 226492 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0110 09:03:31.141259 226492 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0110 09:03:31.144663 226492 out.go:252] - Booting up control plane ...
I0110 09:03:31.144774 226492 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0110 09:03:31.144862 226492 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0110 09:03:31.145855 226492 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0110 09:03:31.166964 226492 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0110 09:03:31.167098 226492 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I0110 09:03:31.174610 226492 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I0110 09:03:31.175019 226492 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0110 09:03:31.175233 226492 kubeadm.go:319] [kubelet-start] Starting the kubelet
I0110 09:03:31.309599 226492 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0110 09:03:31.309777 226492 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0110 09:07:31.312855 226492 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001072153s
I0110 09:07:31.312882 226492 kubeadm.go:319]
I0110 09:07:31.312939 226492 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I0110 09:07:31.312973 226492 kubeadm.go:319] - The kubelet is not running
I0110 09:07:31.313078 226492 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0110 09:07:31.313082 226492 kubeadm.go:319]
I0110 09:07:31.313187 226492 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0110 09:07:31.313219 226492 kubeadm.go:319] - 'systemctl status kubelet'
I0110 09:07:31.313250 226492 kubeadm.go:319] - 'journalctl -xeu kubelet'
I0110 09:07:31.313254 226492 kubeadm.go:319]
I0110 09:07:31.318635 226492 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I0110 09:07:31.319089 226492 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I0110 09:07:31.319205 226492 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0110 09:07:31.319497 226492 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0110 09:07:31.319504 226492 kubeadm.go:319]
I0110 09:07:31.319742 226492 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
I0110 09:07:31.319823 226492 kubeadm.go:403] duration metric: took 8m6.74711775s to StartCluster
I0110 09:07:31.319867 226492 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
I0110 09:07:31.319926 226492 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
I0110 09:07:31.381554 226492 cri.go:96] found id: ""
I0110 09:07:31.381641 226492 logs.go:282] 0 containers: []
W0110 09:07:31.381665 226492 logs.go:284] No container was found matching "kube-apiserver"
I0110 09:07:31.381700 226492 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
I0110 09:07:31.381782 226492 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
I0110 09:07:31.418897 226492 cri.go:96] found id: ""
I0110 09:07:31.418972 226492 logs.go:282] 0 containers: []
W0110 09:07:31.418995 226492 logs.go:284] No container was found matching "etcd"
I0110 09:07:31.419016 226492 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
I0110 09:07:31.419107 226492 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
I0110 09:07:31.483500 226492 cri.go:96] found id: ""
I0110 09:07:31.483590 226492 logs.go:282] 0 containers: []
W0110 09:07:31.483614 226492 logs.go:284] No container was found matching "coredns"
I0110 09:07:31.483658 226492 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
I0110 09:07:31.483764 226492 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
I0110 09:07:31.520794 226492 cri.go:96] found id: ""
I0110 09:07:31.520827 226492 logs.go:282] 0 containers: []
W0110 09:07:31.520837 226492 logs.go:284] No container was found matching "kube-scheduler"
I0110 09:07:31.520844 226492 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
I0110 09:07:31.520902 226492 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
I0110 09:07:31.566878 226492 cri.go:96] found id: ""
I0110 09:07:31.566900 226492 logs.go:282] 0 containers: []
W0110 09:07:31.566909 226492 logs.go:284] No container was found matching "kube-proxy"
I0110 09:07:31.566915 226492 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
I0110 09:07:31.566979 226492 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
I0110 09:07:31.606008 226492 cri.go:96] found id: ""
I0110 09:07:31.606036 226492 logs.go:282] 0 containers: []
W0110 09:07:31.606045 226492 logs.go:284] No container was found matching "kube-controller-manager"
I0110 09:07:31.606052 226492 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
I0110 09:07:31.606109 226492 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
I0110 09:07:31.638251 226492 cri.go:96] found id: ""
I0110 09:07:31.638279 226492 logs.go:282] 0 containers: []
W0110 09:07:31.638288 226492 logs.go:284] No container was found matching "kindnet"
I0110 09:07:31.638298 226492 logs.go:123] Gathering logs for container status ...
I0110 09:07:31.638310 226492 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0110 09:07:31.692020 226492 logs.go:123] Gathering logs for kubelet ...
I0110 09:07:31.692050 226492 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0110 09:07:31.772520 226492 logs.go:123] Gathering logs for dmesg ...
I0110 09:07:31.772553 226492 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0110 09:07:31.787224 226492 logs.go:123] Gathering logs for describe nodes ...
I0110 09:07:31.787256 226492 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0110 09:07:31.865808 226492 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E0110 09:07:31.857753 5623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0110 09:07:31.858695 5623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0110 09:07:31.860406 5623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0110 09:07:31.860729 5623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0110 09:07:31.862213 5623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
E0110 09:07:31.857753 5623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0110 09:07:31.858695 5623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0110 09:07:31.860406 5623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0110 09:07:31.860729 5623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0110 09:07:31.862213 5623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0110 09:07:31.865836 226492 logs.go:123] Gathering logs for Docker ...
I0110 09:07:31.865851 226492 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
W0110 09:07:31.891246 226492 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001072153s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
W0110 09:07:31.891318 226492 out.go:285] *
*
W0110 09:07:31.891479 226492 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001072153s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001072153s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
W0110 09:07:31.891501 226492 out.go:285] *
*
W0110 09:07:31.891823 226492 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0110 09:07:31.898116 226492 out.go:203]
W0110 09:07:31.901143 226492 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001072153s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001072153s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
W0110 09:07:31.901204 226492 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0110 09:07:31.901332 226492 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
* Related issue: https://github.com/kubernetes/minikube/issues/4172
I0110 09:07:31.905173 226492 out.go:203]
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-flag-573381 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=docker" : exit status 109
docker_test.go:110: (dbg) Run: out/minikube-linux-arm64 -p force-systemd-flag-573381 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2026-01-10 09:07:32.591239659 +0000 UTC m=+2835.060327617
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======> post-mortem[TestForceSystemdFlag]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======> post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:240: (dbg) Run: docker inspect force-systemd-flag-573381
helpers_test.go:244: (dbg) docker inspect force-systemd-flag-573381:
-- stdout --
[
{
"Id": "ca7fb38a76639cbbc47d9aa622932d779fc9cf4d7e9d9b996c6df10f73382532",
"Created": "2026-01-10T08:59:12.979495423Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 227295,
"ExitCode": 0,
"Error": "",
"StartedAt": "2026-01-10T08:59:13.052217169Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
"ResolvConfPath": "/var/lib/docker/containers/ca7fb38a76639cbbc47d9aa622932d779fc9cf4d7e9d9b996c6df10f73382532/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/ca7fb38a76639cbbc47d9aa622932d779fc9cf4d7e9d9b996c6df10f73382532/hostname",
"HostsPath": "/var/lib/docker/containers/ca7fb38a76639cbbc47d9aa622932d779fc9cf4d7e9d9b996c6df10f73382532/hosts",
"LogPath": "/var/lib/docker/containers/ca7fb38a76639cbbc47d9aa622932d779fc9cf4d7e9d9b996c6df10f73382532/ca7fb38a76639cbbc47d9aa622932d779fc9cf4d7e9d9b996c6df10f73382532-json.log",
"Name": "/force-systemd-flag-573381",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"force-systemd-flag-573381:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "force-systemd-flag-573381",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 3221225472,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 6442450944,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "ca7fb38a76639cbbc47d9aa622932d779fc9cf4d7e9d9b996c6df10f73382532",
"LowerDir": "/var/lib/docker/overlay2/d181d502cf875700ab20a6c556e307a4a116b930b81596738f9a29b9a10df837-init/diff:/var/lib/docker/overlay2/248ee347a986ccd1655df91e733f088b104cf9846d12889b06882322d682136d/diff",
"MergedDir": "/var/lib/docker/overlay2/d181d502cf875700ab20a6c556e307a4a116b930b81596738f9a29b9a10df837/merged",
"UpperDir": "/var/lib/docker/overlay2/d181d502cf875700ab20a6c556e307a4a116b930b81596738f9a29b9a10df837/diff",
"WorkDir": "/var/lib/docker/overlay2/d181d502cf875700ab20a6c556e307a4a116b930b81596738f9a29b9a10df837/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "force-systemd-flag-573381",
"Source": "/var/lib/docker/volumes/force-systemd-flag-573381/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "force-systemd-flag-573381",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "force-systemd-flag-573381",
"name.minikube.sigs.k8s.io": "force-systemd-flag-573381",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "1e263f37d0636bb7dd6f57071b275162fbbc28f89c393acec7c1f5f7f2bb51cd",
"SandboxKey": "/var/run/docker/netns/1e263f37d063",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33002"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33003"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33006"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33004"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33005"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"force-systemd-flag-573381": {
"IPAMConfig": {
"IPv4Address": "192.168.85.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "da:96:39:c4:d6:99",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "7be823994696e8ebc8abb040bfaf9ab6d9ad8cae383b9c947fcf664e6de8f1b6",
"EndpointID": "3a9247ceaf35a07d27563c2530bf248b53caeb3db10bfb9883d8b5508563a5b7",
"Gateway": "192.168.85.1",
"IPAddress": "192.168.85.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"force-systemd-flag-573381",
"ca7fb38a7663"
]
}
}
}
}
]
-- /stdout --
helpers_test.go:248: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-573381 -n force-systemd-flag-573381
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-573381 -n force-systemd-flag-573381: exit status 6 (397.390025ms)
-- stdout --
Running
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E0110 09:07:32.994400 239446 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-573381" does not appear in /home/jenkins/minikube-integration/22427-2299/kubeconfig
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestForceSystemdFlag FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======> post-mortem[TestForceSystemdFlag]: minikube logs <======
helpers_test.go:256: (dbg) Run: out/minikube-linux-arm64 -p force-systemd-flag-573381 logs -n 25
helpers_test.go:261: TestForceSystemdFlag logs:
-- stdout --
==> Audit <==
┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬───────────
──────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼───────────
──────────┤
│ ssh │ -p cilium-632912 sudo cat /var/lib/kubelet/config.yaml │ cilium-632912 │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │ │
│ ssh │ -p cilium-632912 sudo systemctl status docker --all --full --no-pager │ cilium-632912 │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │ │
│ ssh │ -p cilium-632912 sudo systemctl cat docker --no-pager │ cilium-632912 │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │ │
│ ssh │ -p cilium-632912 sudo cat /etc/docker/daemon.json │ cilium-632912 │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │ │
│ ssh │ -p cilium-632912 sudo docker system info │ cilium-632912 │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │ │
│ ssh │ -p cilium-632912 sudo systemctl status cri-docker --all --full --no-pager │ cilium-632912 │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │ │
│ ssh │ -p cilium-632912 sudo systemctl cat cri-docker --no-pager │ cilium-632912 │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │ │
│ ssh │ -p cilium-632912 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ cilium-632912 │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │ │
│ ssh │ -p cilium-632912 sudo cat /usr/lib/systemd/system/cri-docker.service │ cilium-632912 │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │ │
│ ssh │ -p cilium-632912 sudo cri-dockerd --version │ cilium-632912 │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │ │
│ ssh │ -p cilium-632912 sudo systemctl status containerd --all --full --no-pager │ cilium-632912 │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │ │
│ ssh │ -p cilium-632912 sudo systemctl cat containerd --no-pager │ cilium-632912 │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │ │
│ ssh │ -p cilium-632912 sudo cat /lib/systemd/system/containerd.service │ cilium-632912 │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │ │
│ ssh │ -p cilium-632912 sudo cat /etc/containerd/config.toml │ cilium-632912 │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │ │
│ ssh │ -p cilium-632912 sudo containerd config dump │ cilium-632912 │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │ │
│ ssh │ -p cilium-632912 sudo systemctl status crio --all --full --no-pager │ cilium-632912 │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │ │
│ ssh │ -p cilium-632912 sudo systemctl cat crio --no-pager │ cilium-632912 │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │ │
│ ssh │ -p cilium-632912 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \; │ cilium-632912 │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │ │
│ ssh │ -p cilium-632912 sudo crio config │ cilium-632912 │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │ │
│ delete │ -p cilium-632912 │ cilium-632912 │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │ 10 Jan 26 08:59 UTC │
│ start │ -p force-systemd-flag-573381 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=docker │ force-systemd-flag-573381 │ jenkins │ v1.37.0 │ 10 Jan 26 08:59 UTC │ │
│ ssh │ force-systemd-env-861581 ssh docker info --format {{.CgroupDriver}} │ force-systemd-env-861581 │ jenkins │ v1.37.0 │ 10 Jan 26 09:07 UTC │ 10 Jan 26 09:07 UTC │
│ delete │ -p force-systemd-env-861581 │ force-systemd-env-861581 │ jenkins │ v1.37.0 │ 10 Jan 26 09:07 UTC │ 10 Jan 26 09:07 UTC │
│ start │ -p docker-flags-543601 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker --container-runtime=docker │ docker-flags-543601 │ jenkins │ v1.37.0 │ 10 Jan 26 09:07 UTC │ │
│ ssh │ force-systemd-flag-573381 ssh docker info --format {{.CgroupDriver}} │ force-systemd-flag-573381 │ jenkins │ v1.37.0 │ 10 Jan 26 09:07 UTC │ 10 Jan 26 09:07 UTC │
└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴───────────
──────────┘
==> Last Start <==
Log file created at: 2026/01/10 09:07:30
Running on machine: ip-172-31-30-239
Binary: Built with gc go1.25.5 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0110 09:07:30.577954 238959 out.go:360] Setting OutFile to fd 1 ...
I0110 09:07:30.578077 238959 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 09:07:30.578098 238959 out.go:374] Setting ErrFile to fd 2...
I0110 09:07:30.578103 238959 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 09:07:30.578356 238959 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-2299/.minikube/bin
I0110 09:07:30.578782 238959 out.go:368] Setting JSON to false
I0110 09:07:30.579618 238959 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3003,"bootTime":1768033048,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
I0110 09:07:30.579691 238959 start.go:143] virtualization:
I0110 09:07:30.583609 238959 out.go:179] * [docker-flags-543601] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I0110 09:07:30.588137 238959 out.go:179] - MINIKUBE_LOCATION=22427
I0110 09:07:30.588195 238959 notify.go:221] Checking for updates...
I0110 09:07:30.591563 238959 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0110 09:07:30.594993 238959 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22427-2299/kubeconfig
I0110 09:07:30.598201 238959 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-2299/.minikube
I0110 09:07:30.601415 238959 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I0110 09:07:30.604533 238959 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I0110 09:07:30.608067 238959 config.go:182] Loaded profile config "force-systemd-flag-573381": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I0110 09:07:30.608279 238959 driver.go:422] Setting default libvirt URI to qemu:///system
I0110 09:07:30.642980 238959 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
I0110 09:07:30.643091 238959 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0110 09:07:30.742610 238959 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 09:07:30.733011256 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I0110 09:07:30.742716 238959 docker.go:319] overlay module found
I0110 09:07:30.746007 238959 out.go:179] * Using the docker driver based on user configuration
I0110 09:07:30.748902 238959 start.go:309] selected driver: docker
I0110 09:07:30.748924 238959 start.go:928] validating driver "docker" against <nil>
I0110 09:07:30.748937 238959 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0110 09:07:30.749743 238959 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0110 09:07:30.799716 238959 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 09:07:30.79046822 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I0110 09:07:30.799863 238959 start_flags.go:333] no existing cluster config was found, will generate one from the flags
I0110 09:07:30.800089 238959 start_flags.go:1014] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
I0110 09:07:30.803032 238959 out.go:179] * Using Docker driver with root privileges
I0110 09:07:30.805977 238959 cni.go:84] Creating CNI manager for ""
I0110 09:07:30.806059 238959 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0110 09:07:30.806074 238959 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0110 09:07:30.806162 238959 start.go:353] cluster config:
{Name:docker-flags-543601 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:docker-flags-543601 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I0110 09:07:30.809341 238959 out.go:179] * Starting "docker-flags-543601" primary control-plane node in "docker-flags-543601" cluster
I0110 09:07:30.812218 238959 cache.go:134] Beginning downloading kic base image for docker with docker
I0110 09:07:30.815434 238959 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
I0110 09:07:30.818384 238959 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I0110 09:07:30.818453 238959 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-2299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4
I0110 09:07:30.818467 238959 cache.go:65] Caching tarball of preloaded images
I0110 09:07:30.818475 238959 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
I0110 09:07:30.818557 238959 preload.go:251] Found /home/jenkins/minikube-integration/22427-2299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0110 09:07:30.818567 238959 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
I0110 09:07:30.818677 238959 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/docker-flags-543601/config.json ...
I0110 09:07:30.818694 238959 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2299/.minikube/profiles/docker-flags-543601/config.json: {Name:mkc9ce1b0b1e8d58c1796eb0043a2540bdcf4784 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0110 09:07:30.838350 238959 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
I0110 09:07:30.838373 238959 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
I0110 09:07:30.838391 238959 cache.go:243] Successfully downloaded all kic artifacts
I0110 09:07:30.838421 238959 start.go:360] acquireMachinesLock for docker-flags-543601: {Name:mk04825a748eadeee6f551dea778247eb4fd7a21 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0110 09:07:30.838530 238959 start.go:364] duration metric: took 89.322µs to acquireMachinesLock for "docker-flags-543601"
I0110 09:07:30.838559 238959 start.go:93] Provisioning new machine with config: &{Name:docker-flags-543601 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:docker-flags-543601 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I0110 09:07:30.838626 238959 start.go:125] createHost starting for "" (driver="docker")
I0110 09:07:31.312855 226492 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001072153s
I0110 09:07:31.312882 226492 kubeadm.go:319]
I0110 09:07:31.312939 226492 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I0110 09:07:31.312973 226492 kubeadm.go:319] - The kubelet is not running
I0110 09:07:31.313078 226492 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0110 09:07:31.313082 226492 kubeadm.go:319]
I0110 09:07:31.313187 226492 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0110 09:07:31.313219 226492 kubeadm.go:319] - 'systemctl status kubelet'
I0110 09:07:31.313250 226492 kubeadm.go:319] - 'journalctl -xeu kubelet'
I0110 09:07:31.313254 226492 kubeadm.go:319]
I0110 09:07:31.318635 226492 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I0110 09:07:31.319089 226492 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I0110 09:07:31.319205 226492 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0110 09:07:31.319497 226492 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0110 09:07:31.319504 226492 kubeadm.go:319]
I0110 09:07:31.319742 226492 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
I0110 09:07:31.319823 226492 kubeadm.go:403] duration metric: took 8m6.74711775s to StartCluster
I0110 09:07:31.319867 226492 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
I0110 09:07:31.319926 226492 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
I0110 09:07:31.381554 226492 cri.go:96] found id: ""
I0110 09:07:31.381641 226492 logs.go:282] 0 containers: []
W0110 09:07:31.381665 226492 logs.go:284] No container was found matching "kube-apiserver"
I0110 09:07:31.381700 226492 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
I0110 09:07:31.381782 226492 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
I0110 09:07:31.418897 226492 cri.go:96] found id: ""
I0110 09:07:31.418972 226492 logs.go:282] 0 containers: []
W0110 09:07:31.418995 226492 logs.go:284] No container was found matching "etcd"
I0110 09:07:31.419016 226492 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
I0110 09:07:31.419107 226492 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
I0110 09:07:31.483500 226492 cri.go:96] found id: ""
I0110 09:07:31.483590 226492 logs.go:282] 0 containers: []
W0110 09:07:31.483614 226492 logs.go:284] No container was found matching "coredns"
I0110 09:07:31.483658 226492 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
I0110 09:07:31.483764 226492 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
I0110 09:07:31.520794 226492 cri.go:96] found id: ""
I0110 09:07:31.520827 226492 logs.go:282] 0 containers: []
W0110 09:07:31.520837 226492 logs.go:284] No container was found matching "kube-scheduler"
I0110 09:07:31.520844 226492 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
I0110 09:07:31.520902 226492 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
I0110 09:07:31.566878 226492 cri.go:96] found id: ""
I0110 09:07:31.566900 226492 logs.go:282] 0 containers: []
W0110 09:07:31.566909 226492 logs.go:284] No container was found matching "kube-proxy"
I0110 09:07:31.566915 226492 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
I0110 09:07:31.566979 226492 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
I0110 09:07:31.606008 226492 cri.go:96] found id: ""
I0110 09:07:31.606036 226492 logs.go:282] 0 containers: []
W0110 09:07:31.606045 226492 logs.go:284] No container was found matching "kube-controller-manager"
I0110 09:07:31.606052 226492 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
I0110 09:07:31.606109 226492 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
I0110 09:07:31.638251 226492 cri.go:96] found id: ""
I0110 09:07:31.638279 226492 logs.go:282] 0 containers: []
W0110 09:07:31.638288 226492 logs.go:284] No container was found matching "kindnet"
I0110 09:07:31.638298 226492 logs.go:123] Gathering logs for container status ...
I0110 09:07:31.638310 226492 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0110 09:07:31.692020 226492 logs.go:123] Gathering logs for kubelet ...
I0110 09:07:31.692050 226492 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0110 09:07:31.772520 226492 logs.go:123] Gathering logs for dmesg ...
I0110 09:07:31.772553 226492 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0110 09:07:31.787224 226492 logs.go:123] Gathering logs for describe nodes ...
I0110 09:07:31.787256 226492 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0110 09:07:31.865808 226492 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E0110 09:07:31.857753 5623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0110 09:07:31.858695 5623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0110 09:07:31.860406 5623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0110 09:07:31.860729 5623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0110 09:07:31.862213 5623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
E0110 09:07:31.857753 5623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0110 09:07:31.858695 5623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0110 09:07:31.860406 5623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0110 09:07:31.860729 5623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0110 09:07:31.862213 5623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0110 09:07:31.865836 226492 logs.go:123] Gathering logs for Docker ...
I0110 09:07:31.865851 226492 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
W0110 09:07:31.891246 226492 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001072153s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
W0110 09:07:31.891318 226492 out.go:285] *
W0110 09:07:31.891479 226492 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001072153s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
W0110 09:07:31.891501 226492 out.go:285] *
W0110 09:07:31.891823 226492 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0110 09:07:31.898116 226492 out.go:203]
W0110 09:07:31.901143 226492 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001072153s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
W0110 09:07:31.901204 226492 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0110 09:07:31.901332 226492 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I0110 09:07:31.905173 226492 out.go:203]
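[editor's note] The Suggestion line above names a concrete flag. A sketch of how it would be applied is to re-run the start invocation recorded in the command table earlier in this log with that flag appended. This is only the hint minikube prints, not a verified fix: the kubelet journal below shows the 4m0s health-check timeout is caused by the kubelet refusing to run on a cgroup v1 host, the condition the [WARNING SystemVerification] message says must be explicitly allowed via the kubelet configuration option 'FailCgroupV1'.

    out/minikube-linux-arm64 start -p force-systemd-flag-573381 --memory=3072 --force-systemd \
      --alsologtostderr -v=5 --driver=docker --container-runtime=docker \
      --extra-config=kubelet.cgroup-driver=systemd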
==> Docker <==
Jan 10 08:59:21 force-systemd-flag-573381 dockerd[1144]: time="2026-01-10T08:59:21.818754131Z" level=info msg="Restoring containers: start."
Jan 10 08:59:21 force-systemd-flag-573381 dockerd[1144]: time="2026-01-10T08:59:21.833730578Z" level=info msg="Deleting nftables IPv4 rules" error="exit status 1"
Jan 10 08:59:21 force-systemd-flag-573381 dockerd[1144]: time="2026-01-10T08:59:21.853745151Z" level=info msg="Deleting nftables IPv6 rules" error="exit status 1"
Jan 10 08:59:22 force-systemd-flag-573381 dockerd[1144]: time="2026-01-10T08:59:22.018904601Z" level=info msg="Loading containers: done."
Jan 10 08:59:22 force-systemd-flag-573381 dockerd[1144]: time="2026-01-10T08:59:22.036117918Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
Jan 10 08:59:22 force-systemd-flag-573381 dockerd[1144]: time="2026-01-10T08:59:22.036191157Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
Jan 10 08:59:22 force-systemd-flag-573381 dockerd[1144]: time="2026-01-10T08:59:22.036233603Z" level=info msg="Initializing buildkit"
Jan 10 08:59:22 force-systemd-flag-573381 dockerd[1144]: time="2026-01-10T08:59:22.070227559Z" level=info msg="Completed buildkit initialization"
Jan 10 08:59:22 force-systemd-flag-573381 dockerd[1144]: time="2026-01-10T08:59:22.075723002Z" level=info msg="Daemon has completed initialization"
Jan 10 08:59:22 force-systemd-flag-573381 dockerd[1144]: time="2026-01-10T08:59:22.075799671Z" level=info msg="API listen on /var/run/docker.sock"
Jan 10 08:59:22 force-systemd-flag-573381 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 10 08:59:22 force-systemd-flag-573381 dockerd[1144]: time="2026-01-10T08:59:22.080306083Z" level=info msg="API listen on /run/docker.sock"
Jan 10 08:59:22 force-systemd-flag-573381 dockerd[1144]: time="2026-01-10T08:59:22.080412505Z" level=info msg="API listen on [::]:2376"
Jan 10 08:59:22 force-systemd-flag-573381 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
Jan 10 08:59:22 force-systemd-flag-573381 cri-dockerd[1428]: time="2026-01-10T08:59:22Z" level=info msg="Starting cri-dockerd dev (HEAD)"
Jan 10 08:59:22 force-systemd-flag-573381 cri-dockerd[1428]: time="2026-01-10T08:59:22Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
Jan 10 08:59:22 force-systemd-flag-573381 cri-dockerd[1428]: time="2026-01-10T08:59:22Z" level=info msg="Start docker client with request timeout 0s"
Jan 10 08:59:22 force-systemd-flag-573381 cri-dockerd[1428]: time="2026-01-10T08:59:22Z" level=info msg="Hairpin mode is set to hairpin-veth"
Jan 10 08:59:22 force-systemd-flag-573381 cri-dockerd[1428]: time="2026-01-10T08:59:22Z" level=info msg="Loaded network plugin cni"
Jan 10 08:59:22 force-systemd-flag-573381 cri-dockerd[1428]: time="2026-01-10T08:59:22Z" level=info msg="Docker cri networking managed by network plugin cni"
Jan 10 08:59:22 force-systemd-flag-573381 cri-dockerd[1428]: time="2026-01-10T08:59:22Z" level=info msg="Setting cgroupDriver systemd"
Jan 10 08:59:22 force-systemd-flag-573381 cri-dockerd[1428]: time="2026-01-10T08:59:22Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
Jan 10 08:59:22 force-systemd-flag-573381 cri-dockerd[1428]: time="2026-01-10T08:59:22Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
Jan 10 08:59:22 force-systemd-flag-573381 cri-dockerd[1428]: time="2026-01-10T08:59:22Z" level=info msg="Start cri-dockerd grpc backend"
Jan 10 08:59:22 force-systemd-flag-573381 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
==> describe nodes <==
command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E0110 09:07:33.665933 5756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0110 09:07:33.666625 5756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0110 09:07:33.668194 5756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0110 09:07:33.668715 5756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0110 09:07:33.670219 5756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
==> dmesg <==
[Jan10 08:17] ACPI: SRAT not present
[ +0.000000] ACPI: SRAT not present
[ +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
[ +0.014340] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[ +0.489012] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
[ +0.033977] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
[ +0.807327] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
[ +6.189402] kauditd_printk_skb: 36 callbacks suppressed
[Jan10 08:46] hrtimer: interrupt took 42078579 ns
==> kernel <==
09:07:33 up 50 min, 0 user, load average: 0.83, 1.12, 1.78
Linux force-systemd-flag-573381 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
==> kubelet <==
Jan 10 09:07:29 force-systemd-flag-573381 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 10 09:07:30 force-systemd-flag-573381 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
Jan 10 09:07:30 force-systemd-flag-573381 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 10 09:07:30 force-systemd-flag-573381 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 10 09:07:30 force-systemd-flag-573381 kubelet[5537]: E0110 09:07:30.722485 5537 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Jan 10 09:07:30 force-systemd-flag-573381 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 10 09:07:30 force-systemd-flag-573381 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 10 09:07:31 force-systemd-flag-573381 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
Jan 10 09:07:31 force-systemd-flag-573381 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 10 09:07:31 force-systemd-flag-573381 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 10 09:07:31 force-systemd-flag-573381 kubelet[5561]: E0110 09:07:31.495793 5561 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Jan 10 09:07:31 force-systemd-flag-573381 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 10 09:07:31 force-systemd-flag-573381 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 10 09:07:32 force-systemd-flag-573381 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
Jan 10 09:07:32 force-systemd-flag-573381 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 10 09:07:32 force-systemd-flag-573381 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 10 09:07:32 force-systemd-flag-573381 kubelet[5630]: E0110 09:07:32.322107 5630 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Jan 10 09:07:32 force-systemd-flag-573381 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 10 09:07:32 force-systemd-flag-573381 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 10 09:07:33 force-systemd-flag-573381 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
Jan 10 09:07:33 force-systemd-flag-573381 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 10 09:07:33 force-systemd-flag-573381 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 10 09:07:33 force-systemd-flag-573381 kubelet[5678]: E0110 09:07:33.276274 5678 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Jan 10 09:07:33 force-systemd-flag-573381 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 10 09:07:33 force-systemd-flag-573381 systemd[1]: kubelet.service: Failed with result 'exit-code'.
-- /stdout --
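[editor's note] Both the kubeadm [WARNING SystemVerification] message and the repeated kubelet validation failures in the log above come down to the node running cgroup v1. An illustrative way to confirm which cgroup hierarchy a host is using (not part of the recorded test run) is to check the filesystem type mounted at /sys/fs/cgroup:

    stat -fc %T /sys/fs/cgroup/
    # prints cgroup2fs on a cgroup v2 host and tmpfs on a cgroup v1 host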
helpers_test.go:263: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-573381 -n force-systemd-flag-573381
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-573381 -n force-systemd-flag-573381: exit status 6 (346.781733ms)
-- stdout --
Stopped
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E0110 09:07:34.153543 239680 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-573381" does not appear in /home/jenkins/minikube-integration/22427-2299/kubeconfig
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-flag-573381" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-flag-573381" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-arm64 delete -p force-systemd-flag-573381
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-573381: (2.09211065s)
--- FAIL: TestForceSystemdFlag (507.82s)