=== RUN TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT TestForceSystemdFlag
docker_test.go:91: (dbg) Run: out/minikube-linux-arm64 start -p force-systemd-flag-136540 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=docker
E1229 07:25:02.580056 725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/functional-175099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:26:49.122709 725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/addons-762064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:27:50.282925 725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/skaffold-706153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:27:50.288210 725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/skaffold-706153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:27:50.298645 725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/skaffold-706153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:27:50.318865 725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/skaffold-706153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:27:50.359433 725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/skaffold-706153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:27:50.439876 725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/skaffold-706153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:27:50.600372 725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/skaffold-706153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:27:50.921018 725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/skaffold-706153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:27:51.561343 725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/skaffold-706153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:27:52.841971 725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/skaffold-706153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:27:55.402194 725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/skaffold-706153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:28:00.522803 725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/skaffold-706153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:28:10.763346 725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/skaffold-706153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:28:31.244147 725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/skaffold-706153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:28:46.070623 725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/addons-762064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:29:12.205907 725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/skaffold-706153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:30:02.582133 725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/functional-175099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:30:34.126680 725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/skaffold-706153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-flag-136540 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=docker: exit status 109 (8m22.901302907s)
-- stdout --
* [force-systemd-flag-136540] minikube v1.37.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=22353
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/22353-723215/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-723215/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "force-systemd-flag-136540" primary control-plane node in "force-systemd-flag-136540" cluster
* Pulling base image v0.0.48-1766979815-22353 ...
-- /stdout --
** stderr **
I1229 07:24:31.862836 949749 out.go:360] Setting OutFile to fd 1 ...
I1229 07:24:31.863055 949749 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 07:24:31.863084 949749 out.go:374] Setting ErrFile to fd 2...
I1229 07:24:31.863106 949749 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 07:24:31.863378 949749 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-723215/.minikube/bin
I1229 07:24:31.863845 949749 out.go:368] Setting JSON to false
I1229 07:24:31.864812 949749 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":14821,"bootTime":1766978251,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
I1229 07:24:31.864951 949749 start.go:143] virtualization:
I1229 07:24:31.867861 949749 out.go:179] * [force-systemd-flag-136540] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I1229 07:24:31.869825 949749 out.go:179] - MINIKUBE_LOCATION=22353
I1229 07:24:31.869885 949749 notify.go:221] Checking for updates...
I1229 07:24:31.875448 949749 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1229 07:24:31.878231 949749 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22353-723215/kubeconfig
I1229 07:24:31.880884 949749 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-723215/.minikube
I1229 07:24:31.883938 949749 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I1229 07:24:31.887027 949749 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1229 07:24:31.890228 949749 config.go:182] Loaded profile config "force-systemd-env-262325": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1229 07:24:31.890373 949749 driver.go:422] Setting default libvirt URI to qemu:///system
I1229 07:24:31.923367 949749 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
I1229 07:24:31.923482 949749 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1229 07:24:32.003280 949749 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-29 07:24:31.993283051 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1229 07:24:32.003399 949749 docker.go:319] overlay module found
I1229 07:24:32.006854 949749 out.go:179] * Using the docker driver based on user configuration
I1229 07:24:32.009686 949749 start.go:309] selected driver: docker
I1229 07:24:32.009709 949749 start.go:928] validating driver "docker" against <nil>
I1229 07:24:32.009723 949749 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1229 07:24:32.010422 949749 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1229 07:24:32.093914 949749 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-29 07:24:32.084018482 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1229 07:24:32.094069 949749 start_flags.go:333] no existing cluster config was found, will generate one from the flags
I1229 07:24:32.094295 949749 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
I1229 07:24:32.097347 949749 out.go:179] * Using Docker driver with root privileges
I1229 07:24:32.100108 949749 cni.go:84] Creating CNI manager for ""
I1229 07:24:32.100218 949749 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1229 07:24:32.100231 949749 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I1229 07:24:32.100307 949749 start.go:353] cluster config:
{Name:force-systemd-flag-136540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-136540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1229 07:24:32.103339 949749 out.go:179] * Starting "force-systemd-flag-136540" primary control-plane node in "force-systemd-flag-136540" cluster
I1229 07:24:32.106301 949749 cache.go:134] Beginning downloading kic base image for docker with docker
I1229 07:24:32.109381 949749 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
I1229 07:24:32.112189 949749 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I1229 07:24:32.112257 949749 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-723215/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4
I1229 07:24:32.112273 949749 cache.go:65] Caching tarball of preloaded images
I1229 07:24:32.112370 949749 preload.go:251] Found /home/jenkins/minikube-integration/22353-723215/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I1229 07:24:32.112387 949749 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
I1229 07:24:32.112504 949749 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/config.json ...
I1229 07:24:32.112529 949749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/config.json: {Name:mkd5ba600f81117204cfd1742166eccffeab192c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1229 07:24:32.112704 949749 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
I1229 07:24:32.142727 949749 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
I1229 07:24:32.142753 949749 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
I1229 07:24:32.142768 949749 cache.go:243] Successfully downloaded all kic artifacts
I1229 07:24:32.142799 949749 start.go:360] acquireMachinesLock for force-systemd-flag-136540: {Name:mk4472157db195a18f5d219cb5373fd9e5bc1c32 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1229 07:24:32.142903 949749 start.go:364] duration metric: took 83.87µs to acquireMachinesLock for "force-systemd-flag-136540"
I1229 07:24:32.142934 949749 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-136540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-136540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I1229 07:24:32.143011 949749 start.go:125] createHost starting for "" (driver="docker")
I1229 07:24:32.146413 949749 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
I1229 07:24:32.146645 949749 start.go:159] libmachine.API.Create for "force-systemd-flag-136540" (driver="docker")
I1229 07:24:32.146676 949749 client.go:173] LocalClient.Create starting
I1229 07:24:32.146732 949749 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca.pem
I1229 07:24:32.146774 949749 main.go:144] libmachine: Decoding PEM data...
I1229 07:24:32.146796 949749 main.go:144] libmachine: Parsing certificate...
I1229 07:24:32.146850 949749 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-723215/.minikube/certs/cert.pem
I1229 07:24:32.146881 949749 main.go:144] libmachine: Decoding PEM data...
I1229 07:24:32.146896 949749 main.go:144] libmachine: Parsing certificate...
I1229 07:24:32.147267 949749 cli_runner.go:164] Run: docker network inspect force-systemd-flag-136540 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1229 07:24:32.184241 949749 cli_runner.go:211] docker network inspect force-systemd-flag-136540 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1229 07:24:32.184329 949749 network_create.go:284] running [docker network inspect force-systemd-flag-136540] to gather additional debugging logs...
I1229 07:24:32.184347 949749 cli_runner.go:164] Run: docker network inspect force-systemd-flag-136540
W1229 07:24:32.202472 949749 cli_runner.go:211] docker network inspect force-systemd-flag-136540 returned with exit code 1
I1229 07:24:32.202500 949749 network_create.go:287] error running [docker network inspect force-systemd-flag-136540]: docker network inspect force-systemd-flag-136540: exit status 1
stdout:
[]
stderr:
Error response from daemon: network force-systemd-flag-136540 not found
I1229 07:24:32.202514 949749 network_create.go:289] output of [docker network inspect force-systemd-flag-136540]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network force-systemd-flag-136540 not found
** /stderr **
I1229 07:24:32.202606 949749 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1229 07:24:32.225877 949749 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6e99902584b0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:b2:8c:10:44:52} reservation:<nil>}
I1229 07:24:32.226204 949749 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e5c59511c8c6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:c4:8e:57:d6:4a} reservation:<nil>}
I1229 07:24:32.226527 949749 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-857d67da440f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:66:bc:86:0f:2c:21} reservation:<nil>}
I1229 07:24:32.226688 949749 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-79307d27fbf3 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:16:05:93:d6:4a:c7} reservation:<nil>}
I1229 07:24:32.227128 949749 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a58010}
I1229 07:24:32.227147 949749 network_create.go:124] attempt to create docker network force-systemd-flag-136540 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
I1229 07:24:32.227210 949749 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-136540 force-systemd-flag-136540
I1229 07:24:32.293469 949749 network_create.go:108] docker network force-systemd-flag-136540 192.168.85.0/24 created
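The "skipping subnet ... that is taken" lines above show how the network is chosen: starting from 192.168.49.0/24, the third octet is advanced in steps of 9 (49, 58, 67, 76) until a /24 with no existing bridge is found, which is 192.168.85.0/24 in this run. A minimal Go sketch of that stepping, assuming the set of taken subnets is already known; the "taken" map below is illustrative, not minikube's actual lookup:

package main

import "fmt"

// Illustrative sketch of the subnet probing visible in the log above: start at
// 192.168.49.0/24 and step the third octet by 9 until a subnet is free. The
// "taken" map stands in for the bridge networks docker already has; it is an
// assumption for this example, not minikube's real API.
func main() {
	taken := map[int]bool{49: true, 58: true, 67: true, 76: true}
	for octet := 49; octet <= 255; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if taken[octet] {
			fmt.Println("skipping subnet", subnet, "that is taken")
			continue
		}
		fmt.Println("using free private subnet", subnet)
		break
	}
}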
I1229 07:24:32.293514 949749 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-136540" container
I1229 07:24:32.293586 949749 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1229 07:24:32.309969 949749 cli_runner.go:164] Run: docker volume create force-systemd-flag-136540 --label name.minikube.sigs.k8s.io=force-systemd-flag-136540 --label created_by.minikube.sigs.k8s.io=true
I1229 07:24:32.342891 949749 oci.go:103] Successfully created a docker volume force-systemd-flag-136540
I1229 07:24:32.343001 949749 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-136540-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-136540 --entrypoint /usr/bin/test -v force-systemd-flag-136540:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -d /var/lib
I1229 07:24:32.956540 949749 oci.go:107] Successfully prepared a docker volume force-systemd-flag-136540
I1229 07:24:32.956596 949749 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I1229 07:24:32.956607 949749 kic.go:194] Starting extracting preloaded images to volume ...
I1229 07:24:32.956681 949749 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-723215/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-136540:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir
I1229 07:24:36.453768 949749 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-723215/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-136540:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir: (3.49703133s)
I1229 07:24:36.453806 949749 kic.go:203] duration metric: took 3.497195297s to extract preloaded images to volume ...
W1229 07:24:36.453940 949749 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1229 07:24:36.454069 949749 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1229 07:24:36.553908 949749 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-136540 --name force-systemd-flag-136540 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-136540 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-136540 --network force-systemd-flag-136540 --ip 192.168.85.2 --volume force-systemd-flag-136540:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409
I1229 07:24:36.921885 949749 cli_runner.go:164] Run: docker container inspect force-systemd-flag-136540 --format={{.State.Running}}
I1229 07:24:36.949531 949749 cli_runner.go:164] Run: docker container inspect force-systemd-flag-136540 --format={{.State.Status}}
I1229 07:24:36.977208 949749 cli_runner.go:164] Run: docker exec force-systemd-flag-136540 stat /var/lib/dpkg/alternatives/iptables
I1229 07:24:37.043401 949749 oci.go:144] the created container "force-systemd-flag-136540" has a running status.
I1229 07:24:37.043446 949749 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22353-723215/.minikube/machines/force-systemd-flag-136540/id_rsa...
I1229 07:24:37.613435 949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/machines/force-systemd-flag-136540/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I1229 07:24:37.613488 949749 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22353-723215/.minikube/machines/force-systemd-flag-136540/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1229 07:24:37.645753 949749 cli_runner.go:164] Run: docker container inspect force-systemd-flag-136540 --format={{.State.Status}}
I1229 07:24:37.677430 949749 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1229 07:24:37.677450 949749 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-136540 chown docker:docker /home/docker/.ssh/authorized_keys]
I1229 07:24:37.757532 949749 cli_runner.go:164] Run: docker container inspect force-systemd-flag-136540 --format={{.State.Status}}
I1229 07:24:37.783838 949749 machine.go:94] provisionDockerMachine start ...
I1229 07:24:37.783940 949749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-136540
I1229 07:24:37.816369 949749 main.go:144] libmachine: Using SSH client type: native
I1229 07:24:37.816708 949749 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33762 <nil> <nil>}
I1229 07:24:37.816718 949749 main.go:144] libmachine: About to run SSH command:
hostname
I1229 07:24:37.817297 949749 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48280->127.0.0.1:33762: read: connection reset by peer
I1229 07:24:40.967978 949749 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-136540
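The "Error dialing TCP ... connection reset by peer" line above is transient: the container had only just started and its SSH service was likely not yet accepting connections, so the provisioner keeps retrying against the forwarded port until the `hostname` command succeeds about three seconds later. A hedged Go sketch of that kind of retry loop against the forwarded port 33762 seen in this run; the attempt count and sleep interval are illustrative, not minikube's actual values:

package main

import (
	"fmt"
	"net"
	"time"
)

// Retry a TCP dial against the host port docker forwarded to the container's
// sshd (127.0.0.1:33762 in this run) until it accepts a connection.
func main() {
	addr := "127.0.0.1:33762"
	for attempt := 1; attempt <= 10; attempt++ {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("ssh port is reachable after attempt", attempt)
			return
		}
		fmt.Printf("attempt %d: %v, retrying\n", attempt, err)
		time.Sleep(time.Second)
	}
	fmt.Println("gave up waiting for ssh")
}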
I1229 07:24:40.968004 949749 ubuntu.go:182] provisioning hostname "force-systemd-flag-136540"
I1229 07:24:40.968074 949749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-136540
I1229 07:24:40.986787 949749 main.go:144] libmachine: Using SSH client type: native
I1229 07:24:40.987162 949749 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33762 <nil> <nil>}
I1229 07:24:40.987185 949749 main.go:144] libmachine: About to run SSH command:
sudo hostname force-systemd-flag-136540 && echo "force-systemd-flag-136540" | sudo tee /etc/hostname
I1229 07:24:41.155636 949749 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-136540
I1229 07:24:41.155724 949749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-136540
I1229 07:24:41.177733 949749 main.go:144] libmachine: Using SSH client type: native
I1229 07:24:41.178031 949749 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33762 <nil> <nil>}
I1229 07:24:41.178048 949749 main.go:144] libmachine: About to run SSH command:
if ! grep -xq '.*\sforce-systemd-flag-136540' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-136540/g' /etc/hosts;
else
echo '127.0.1.1 force-systemd-flag-136540' | sudo tee -a /etc/hosts;
fi
fi
I1229 07:24:41.332316 949749 main.go:144] libmachine: SSH cmd err, output: <nil>:
I1229 07:24:41.332339 949749 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-723215/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-723215/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-723215/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-723215/.minikube}
I1229 07:24:41.332371 949749 ubuntu.go:190] setting up certificates
I1229 07:24:41.332381 949749 provision.go:84] configureAuth start
I1229 07:24:41.332439 949749 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-136540
I1229 07:24:41.349068 949749 provision.go:143] copyHostCerts
I1229 07:24:41.349109 949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22353-723215/.minikube/ca.pem
I1229 07:24:41.349165 949749 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-723215/.minikube/ca.pem, removing ...
I1229 07:24:41.349180 949749 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-723215/.minikube/ca.pem
I1229 07:24:41.349258 949749 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-723215/.minikube/ca.pem (1082 bytes)
I1229 07:24:41.349344 949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22353-723215/.minikube/cert.pem
I1229 07:24:41.349367 949749 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-723215/.minikube/cert.pem, removing ...
I1229 07:24:41.349374 949749 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-723215/.minikube/cert.pem
I1229 07:24:41.349400 949749 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-723215/.minikube/cert.pem (1123 bytes)
I1229 07:24:41.349453 949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22353-723215/.minikube/key.pem
I1229 07:24:41.349475 949749 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-723215/.minikube/key.pem, removing ...
I1229 07:24:41.349480 949749 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-723215/.minikube/key.pem
I1229 07:24:41.349511 949749 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-723215/.minikube/key.pem (1675 bytes)
I1229 07:24:41.349577 949749 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-723215/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-136540 san=[127.0.0.1 192.168.85.2 force-systemd-flag-136540 localhost minikube]
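The provision step above issues a server certificate whose SANs cover the loopback address, the container's static IP, the machine name, localhost, and minikube. A self-contained Go sketch of creating a certificate with those SANs; it is self-signed to stay short, whereas minikube signs with its CA key pair, so treat this as an illustration of the SAN plumbing rather than minikube's provisioning code:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// Issue a self-signed server certificate carrying the same SANs the log
// reports (127.0.0.1, 192.168.85.2, the machine name, localhost, minikube).
func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.force-systemd-flag-136540"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config above
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"force-systemd-flag-136540", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}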
I1229 07:24:41.546735 949749 provision.go:177] copyRemoteCerts
I1229 07:24:41.546817 949749 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1229 07:24:41.546861 949749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-136540
I1229 07:24:41.566148 949749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33762 SSHKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/machines/force-systemd-flag-136540/id_rsa Username:docker}
I1229 07:24:41.671926 949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I1229 07:24:41.672027 949749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1229 07:24:41.689940 949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/machines/server.pem -> /etc/docker/server.pem
I1229 07:24:41.690004 949749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
I1229 07:24:41.707708 949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I1229 07:24:41.707770 949749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1229 07:24:41.725505 949749 provision.go:87] duration metric: took 393.100381ms to configureAuth
I1229 07:24:41.725531 949749 ubuntu.go:206] setting minikube options for container-runtime
I1229 07:24:41.725728 949749 config.go:182] Loaded profile config "force-systemd-flag-136540": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1229 07:24:41.725782 949749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-136540
I1229 07:24:41.743373 949749 main.go:144] libmachine: Using SSH client type: native
I1229 07:24:41.743703 949749 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33762 <nil> <nil>}
I1229 07:24:41.743713 949749 main.go:144] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1229 07:24:41.897630 949749 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
I1229 07:24:41.897708 949749 ubuntu.go:71] root file system type: overlay
I1229 07:24:41.897848 949749 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I1229 07:24:41.897935 949749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-136540
I1229 07:24:41.921519 949749 main.go:144] libmachine: Using SSH client type: native
I1229 07:24:41.921836 949749 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33762 <nil> <nil>}
I1229 07:24:41.921950 949749 main.go:144] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
Wants=network-online.target containerd.service
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=always
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
-H fd:// --containerd=/run/containerd/containerd.sock \
-H unix:///var/run/docker.sock \
--default-ulimit=nofile=1048576:1048576 \
--tlsverify \
--tlscacert /etc/docker/ca.pem \
--tlscert /etc/docker/server.pem \
--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1229 07:24:42.102668 949749 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
Wants=network-online.target containerd.service
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=always
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H fd:// --containerd=/run/containerd/containerd.sock -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
I1229 07:24:42.102864 949749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-136540
I1229 07:24:42.133688 949749 main.go:144] libmachine: Using SSH client type: native
I1229 07:24:42.134051 949749 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33762 <nil> <nil>}
I1229 07:24:42.134080 949749 main.go:144] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1229 07:24:43.163247 949749 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2025-12-12 14:49:02.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2025-12-29 07:24:42.093571384 +0000
@@ -9,23 +9,34 @@
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
Restart=always
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H fd:// --containerd=/run/containerd/containerd.sock -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
+
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
+LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I1229 07:24:43.163281 949749 machine.go:97] duration metric: took 5.379421515s to provisionDockerMachine
I1229 07:24:43.163293 949749 client.go:176] duration metric: took 11.016607482s to LocalClient.Create
I1229 07:24:43.163321 949749 start.go:167] duration metric: took 11.016676896s to libmachine.API.Create "force-systemd-flag-136540"
I1229 07:24:43.163335 949749 start.go:293] postStartSetup for "force-systemd-flag-136540" (driver="docker")
I1229 07:24:43.163345 949749 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1229 07:24:43.163421 949749 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1229 07:24:43.163475 949749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-136540
I1229 07:24:43.181417 949749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33762 SSHKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/machines/force-systemd-flag-136540/id_rsa Username:docker}
I1229 07:24:43.288488 949749 ssh_runner.go:195] Run: cat /etc/os-release
I1229 07:24:43.291782 949749 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1229 07:24:43.291809 949749 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1229 07:24:43.291822 949749 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-723215/.minikube/addons for local assets ...
I1229 07:24:43.291880 949749 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-723215/.minikube/files for local assets ...
I1229 07:24:43.291954 949749 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-723215/.minikube/files/etc/ssl/certs/7250782.pem -> 7250782.pem in /etc/ssl/certs
I1229 07:24:43.291962 949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/files/etc/ssl/certs/7250782.pem -> /etc/ssl/certs/7250782.pem
I1229 07:24:43.292057 949749 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1229 07:24:43.299384 949749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/files/etc/ssl/certs/7250782.pem --> /etc/ssl/certs/7250782.pem (1708 bytes)
I1229 07:24:43.317036 949749 start.go:296] duration metric: took 153.684905ms for postStartSetup
I1229 07:24:43.317451 949749 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-136540
I1229 07:24:43.335322 949749 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/config.json ...
I1229 07:24:43.335607 949749 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1229 07:24:43.335663 949749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-136540
I1229 07:24:43.354609 949749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33762 SSHKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/machines/force-systemd-flag-136540/id_rsa Username:docker}
I1229 07:24:43.461171 949749 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1229 07:24:43.466044 949749 start.go:128] duration metric: took 11.323009959s to createHost
I1229 07:24:43.466091 949749 start.go:83] releasing machines lock for "force-systemd-flag-136540", held for 11.323168174s
I1229 07:24:43.466184 949749 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-136540
I1229 07:24:43.483271 949749 ssh_runner.go:195] Run: cat /version.json
I1229 07:24:43.483331 949749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-136540
I1229 07:24:43.483583 949749 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1229 07:24:43.483648 949749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-136540
I1229 07:24:43.504986 949749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33762 SSHKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/machines/force-systemd-flag-136540/id_rsa Username:docker}
I1229 07:24:43.516239 949749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33762 SSHKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/machines/force-systemd-flag-136540/id_rsa Username:docker}
I1229 07:24:43.695447 949749 ssh_runner.go:195] Run: systemctl --version
I1229 07:24:43.701895 949749 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1229 07:24:43.706075 949749 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1229 07:24:43.706145 949749 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1229 07:24:43.733426 949749 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
I1229 07:24:43.733449 949749 start.go:496] detecting cgroup driver to use...
I1229 07:24:43.733462 949749 start.go:500] using "systemd" cgroup driver as enforced via flags
I1229 07:24:43.733554 949749 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1229 07:24:43.747390 949749 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1229 07:24:43.755747 949749 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1229 07:24:43.764296 949749 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
I1229 07:24:43.764426 949749 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
I1229 07:24:43.773285 949749 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1229 07:24:43.782062 949749 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1229 07:24:43.790627 949749 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1229 07:24:43.799083 949749 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1229 07:24:43.806872 949749 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1229 07:24:43.815660 949749 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1229 07:24:43.824501 949749 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1229 07:24:43.833359 949749 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1229 07:24:43.840707 949749 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1229 07:24:43.847859 949749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1229 07:24:43.958912 949749 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1229 07:24:44.059070 949749 start.go:496] detecting cgroup driver to use...
I1229 07:24:44.059146 949749 start.go:500] using "systemd" cgroup driver as enforced via flags
I1229 07:24:44.059226 949749 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I1229 07:24:44.075065 949749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1229 07:24:44.088639 949749 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1229 07:24:44.122930 949749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1229 07:24:44.137375 949749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1229 07:24:44.155656 949749 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I1229 07:24:44.175473 949749 ssh_runner.go:195] Run: which cri-dockerd
I1229 07:24:44.180371 949749 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I1229 07:24:44.190423 949749 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
I1229 07:24:44.205661 949749 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I1229 07:24:44.321544 949749 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I1229 07:24:44.440345 949749 docker.go:578] configuring docker to use "systemd" as cgroup driver...
I1229 07:24:44.440462 949749 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
I1229 07:24:44.454047 949749 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
I1229 07:24:44.466753 949749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1229 07:24:44.579909 949749 ssh_runner.go:195] Run: sudo systemctl restart docker
I1229 07:24:44.997772 949749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1229 07:24:45.025871 949749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I1229 07:24:45.048256 949749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I1229 07:24:45.067946 949749 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I1229 07:24:45.246433 949749 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I1229 07:24:45.394951 949749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1229 07:24:45.519551 949749 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I1229 07:24:45.535811 949749 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
I1229 07:24:45.548627 949749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1229 07:24:45.673698 949749 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I1229 07:24:45.747485 949749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I1229 07:24:45.762101 949749 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
I1229 07:24:45.762224 949749 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I1229 07:24:45.765988 949749 start.go:574] Will wait 60s for crictl version
I1229 07:24:45.766089 949749 ssh_runner.go:195] Run: which crictl
I1229 07:24:45.769514 949749 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1229 07:24:45.795220 949749 start.go:590] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 29.1.3
RuntimeApiVersion: v1
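The runtime identity above is read through the cri-dockerd endpoint configured in /etc/crictl.yaml earlier in this run. A minimal manual re-check of the same step, assuming a shell on the node (the explicit --runtime-endpoint flag is shown only for illustration; the log relies on the crictl.yaml default):

  # query the CRI runtime over the cri-dockerd socket written to /etc/crictl.yaml
  sudo /usr/local/bin/crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version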
I1229 07:24:45.795343 949749 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1229 07:24:45.817012 949749 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1229 07:24:45.845183 949749 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 29.1.3 ...
I1229 07:24:45.845304 949749 cli_runner.go:164] Run: docker network inspect force-systemd-flag-136540 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1229 07:24:45.862014 949749 ssh_runner.go:195] Run: grep 192.168.85.1 host.minikube.internal$ /etc/hosts
I1229 07:24:45.865896 949749 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1229 07:24:45.875964 949749 kubeadm.go:884] updating cluster {Name:force-systemd-flag-136540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-136540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
I1229 07:24:45.876083 949749 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I1229 07:24:45.876188 949749 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1229 07:24:45.893986 949749 docker.go:694] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
registry.k8s.io/pause:3.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I1229 07:24:45.894009 949749 docker.go:624] Images already preloaded, skipping extraction
I1229 07:24:45.894075 949749 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1229 07:24:45.911802 949749 docker.go:694] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
registry.k8s.io/pause:3.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I1229 07:24:45.911829 949749 cache_images.go:86] Images are preloaded, skipping loading
I1229 07:24:45.911839 949749 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 docker true true} ...
I1229 07:24:45.911933 949749 kubeadm.go:947] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-136540 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
[Install]
config:
{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-136540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
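The kubelet unit fragment above ends up as a systemd drop-in (the scp of 10-kubeadm.conf appears a few lines below). A quick way to see the merged unit on the node, mirroring the systemctl cat docker.service call earlier in this run, would be:

  # show kubelet.service together with the 10-kubeadm.conf drop-in minikube writes
  sudo systemctl cat kubelet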
I1229 07:24:45.912006 949749 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I1229 07:24:45.963834 949749 cni.go:84] Creating CNI manager for ""
I1229 07:24:45.963864 949749 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1229 07:24:45.963922 949749 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1229 07:24:45.963952 949749 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-136540 NodeName:force-systemd-flag-136540 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1229 07:24:45.964163 949749 kubeadm.go:203] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.85.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "force-systemd-flag-136540"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.85.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.35.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
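Since this profile was started with --force-systemd, the generated config above pins cgroupDriver: systemd, matching the Docker and containerd edits made earlier in this run. A minimal sketch of how to confirm the driver on the node by hand, using the same commands the log already runs:

  # Docker should report systemd after /etc/docker/daemon.json was rewritten
  docker info --format '{{.CgroupDriver}}'
  # containerd should carry SystemdCgroup = true after the sed edit above
  sudo grep SystemdCgroup /etc/containerd/config.toml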
I1229 07:24:45.964261 949749 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
I1229 07:24:45.972065 949749 binaries.go:51] Found k8s binaries, skipping transfer
I1229 07:24:45.972197 949749 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1229 07:24:45.979844 949749 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
I1229 07:24:45.992556 949749 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1229 07:24:46.006552 949749 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
I1229 07:24:46.020398 949749 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I1229 07:24:46.024230 949749 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1229 07:24:46.035368 949749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1229 07:24:46.163494 949749 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1229 07:24:46.184599 949749 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540 for IP: 192.168.85.2
I1229 07:24:46.184618 949749 certs.go:195] generating shared ca certs ...
I1229 07:24:46.184635 949749 certs.go:227] acquiring lock for ca certs: {Name:mk9c2ed6b225eba3a3b373f488351467f747c9ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1229 07:24:46.184776 949749 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-723215/.minikube/ca.key
I1229 07:24:46.184825 949749 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-723215/.minikube/proxy-client-ca.key
I1229 07:24:46.184837 949749 certs.go:257] generating profile certs ...
I1229 07:24:46.184891 949749 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/client.key
I1229 07:24:46.184906 949749 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/client.crt with IP's: []
I1229 07:24:46.406421 949749 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/client.crt ...
I1229 07:24:46.406498 949749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/client.crt: {Name:mkeabcc81e93cc9bab177300f214aee09ffb34da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1229 07:24:46.406748 949749 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/client.key ...
I1229 07:24:46.406796 949749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/client.key: {Name:mk1d3be86290b8aa5c0871eada27f23610866e60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1229 07:24:46.406948 949749 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/apiserver.key.58f5d73c
I1229 07:24:46.407005 949749 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/apiserver.crt.58f5d73c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
I1229 07:24:46.644365 949749 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/apiserver.crt.58f5d73c ...
I1229 07:24:46.644395 949749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/apiserver.crt.58f5d73c: {Name:mk20477dd3211295249f0fd8db3287c9ced07fcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1229 07:24:46.644644 949749 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/apiserver.key.58f5d73c ...
I1229 07:24:46.644661 949749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/apiserver.key.58f5d73c: {Name:mk90a993a5735e7ecab2e7be38b0b8ea44299fa6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1229 07:24:46.644750 949749 certs.go:382] copying /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/apiserver.crt.58f5d73c -> /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/apiserver.crt
I1229 07:24:46.644835 949749 certs.go:386] copying /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/apiserver.key.58f5d73c -> /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/apiserver.key
I1229 07:24:46.644897 949749 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/proxy-client.key
I1229 07:24:46.644913 949749 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/proxy-client.crt with IP's: []
I1229 07:24:47.026929 949749 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/proxy-client.crt ...
I1229 07:24:47.026978 949749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/proxy-client.crt: {Name:mk152d5d3beadbce81174a15f580235a4bfefeaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1229 07:24:47.027179 949749 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/proxy-client.key ...
I1229 07:24:47.027195 949749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/proxy-client.key: {Name:mkd3178fa5a3e305677094e64826570746f84993 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1229 07:24:47.027366 949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I1229 07:24:47.027396 949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I1229 07:24:47.027413 949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I1229 07:24:47.027428 949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I1229 07:24:47.027440 949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I1229 07:24:47.027462 949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I1229 07:24:47.027478 949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I1229 07:24:47.027488 949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I1229 07:24:47.027539 949749 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/725078.pem (1338 bytes)
W1229 07:24:47.027580 949749 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-723215/.minikube/certs/725078_empty.pem, impossibly tiny 0 bytes
I1229 07:24:47.027593 949749 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca-key.pem (1675 bytes)
I1229 07:24:47.027622 949749 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca.pem (1082 bytes)
I1229 07:24:47.027655 949749 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/cert.pem (1123 bytes)
I1229 07:24:47.027688 949749 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/key.pem (1675 bytes)
I1229 07:24:47.027736 949749 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-723215/.minikube/files/etc/ssl/certs/7250782.pem (1708 bytes)
I1229 07:24:47.027771 949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/files/etc/ssl/certs/7250782.pem -> /usr/share/ca-certificates/7250782.pem
I1229 07:24:47.027789 949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I1229 07:24:47.027800 949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/725078.pem -> /usr/share/ca-certificates/725078.pem
I1229 07:24:47.028420 949749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1229 07:24:47.047819 949749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1229 07:24:47.066416 949749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1229 07:24:47.083760 949749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1229 07:24:47.100871 949749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
I1229 07:24:47.118300 949749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1229 07:24:47.135827 949749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1229 07:24:47.154223 949749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I1229 07:24:47.171152 949749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/files/etc/ssl/certs/7250782.pem --> /usr/share/ca-certificates/7250782.pem (1708 bytes)
I1229 07:24:47.188424 949749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1229 07:24:47.204881 949749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/certs/725078.pem --> /usr/share/ca-certificates/725078.pem (1338 bytes)
I1229 07:24:47.222920 949749 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
I1229 07:24:47.236010 949749 ssh_runner.go:195] Run: openssl version
I1229 07:24:47.242847 949749 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7250782.pem
I1229 07:24:47.250549 949749 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7250782.pem /etc/ssl/certs/7250782.pem
I1229 07:24:47.257970 949749 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7250782.pem
I1229 07:24:47.261605 949749 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 06:52 /usr/share/ca-certificates/7250782.pem
I1229 07:24:47.261667 949749 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7250782.pem
I1229 07:24:47.303672 949749 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I1229 07:24:47.311437 949749 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7250782.pem /etc/ssl/certs/3ec20f2e.0
I1229 07:24:47.319608 949749 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1229 07:24:47.327019 949749 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1229 07:24:47.334490 949749 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1229 07:24:47.338076 949749 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 06:47 /usr/share/ca-certificates/minikubeCA.pem
I1229 07:24:47.338184 949749 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1229 07:24:47.381190 949749 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1229 07:24:47.388743 949749 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
I1229 07:24:47.395955 949749 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/725078.pem
I1229 07:24:47.403397 949749 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/725078.pem /etc/ssl/certs/725078.pem
I1229 07:24:47.410817 949749 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/725078.pem
I1229 07:24:47.414638 949749 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 06:52 /usr/share/ca-certificates/725078.pem
I1229 07:24:47.414707 949749 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/725078.pem
I1229 07:24:47.458494 949749 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I1229 07:24:47.465936 949749 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/725078.pem /etc/ssl/certs/51391683.0
I1229 07:24:47.473134 949749 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1229 07:24:47.476718 949749 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1229 07:24:47.476770 949749 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-136540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-136540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1229 07:24:47.476884 949749 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1229 07:24:47.493620 949749 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1229 07:24:47.502107 949749 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1229 07:24:47.509981 949749 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1229 07:24:47.510046 949749 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1229 07:24:47.517804 949749 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1229 07:24:47.517825 949749 kubeadm.go:158] found existing configuration files:
I1229 07:24:47.517877 949749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1229 07:24:47.525590 949749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1229 07:24:47.525674 949749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1229 07:24:47.532930 949749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1229 07:24:47.540396 949749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1229 07:24:47.540486 949749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1229 07:24:47.547676 949749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1229 07:24:47.555165 949749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1229 07:24:47.555256 949749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1229 07:24:47.562475 949749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1229 07:24:47.570046 949749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1229 07:24:47.570109 949749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1229 07:24:47.577347 949749 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1229 07:24:47.617344 949749 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
I1229 07:24:47.617407 949749 kubeadm.go:319] [preflight] Running pre-flight checks
I1229 07:24:47.711675 949749 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1229 07:24:47.711830 949749 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1229 07:24:47.711890 949749 kubeadm.go:319] OS: Linux
I1229 07:24:47.711974 949749 kubeadm.go:319] CGROUPS_CPU: enabled
I1229 07:24:47.712056 949749 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1229 07:24:47.712162 949749 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1229 07:24:47.712241 949749 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1229 07:24:47.712321 949749 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1229 07:24:47.712401 949749 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1229 07:24:47.712480 949749 kubeadm.go:319] CGROUPS_PIDS: enabled
I1229 07:24:47.712559 949749 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1229 07:24:47.712639 949749 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1229 07:24:47.783238 949749 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1229 07:24:47.783386 949749 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1229 07:24:47.783503 949749 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1229 07:24:47.800559 949749 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1229 07:24:47.807021 949749 out.go:252] - Generating certificates and keys ...
I1229 07:24:47.807150 949749 kubeadm.go:319] [certs] Using existing ca certificate authority
I1229 07:24:47.807244 949749 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1229 07:24:48.391180 949749 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1229 07:24:48.594026 949749 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1229 07:24:48.825994 949749 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1229 07:24:49.323806 949749 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1229 07:24:49.506950 949749 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1229 07:24:49.507188 949749 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-136540 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I1229 07:24:49.719847 949749 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1229 07:24:49.720093 949749 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-136540 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I1229 07:24:50.129385 949749 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1229 07:24:50.272350 949749 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1229 07:24:50.704674 949749 kubeadm.go:319] [certs] Generating "sa" key and public key
I1229 07:24:50.705019 949749 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1229 07:24:51.089352 949749 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1229 07:24:51.167795 949749 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1229 07:24:51.380140 949749 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1229 07:24:51.696561 949749 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1229 07:24:51.802016 949749 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1229 07:24:51.802726 949749 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1229 07:24:51.805447 949749 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1229 07:24:51.809325 949749 out.go:252] - Booting up control plane ...
I1229 07:24:51.809441 949749 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1229 07:24:51.809530 949749 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1229 07:24:51.809609 949749 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1229 07:24:51.825390 949749 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1229 07:24:51.825876 949749 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1229 07:24:51.840218 949749 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1229 07:24:51.840883 949749 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1229 07:24:51.841100 949749 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1229 07:24:51.986925 949749 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1229 07:24:51.987097 949749 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1229 07:28:51.986578 949749 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00003953s
I1229 07:28:51.986615 949749 kubeadm.go:319]
I1229 07:28:51.986711 949749 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1229 07:28:51.986761 949749 kubeadm.go:319] - The kubelet is not running
I1229 07:28:51.986866 949749 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1229 07:28:51.986874 949749 kubeadm.go:319]
I1229 07:28:51.986980 949749 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1229 07:28:51.987012 949749 kubeadm.go:319] - 'systemctl status kubelet'
I1229 07:28:51.987044 949749 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1229 07:28:51.987048 949749 kubeadm.go:319]
I1229 07:28:51.991310 949749 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1229 07:28:51.991737 949749 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1229 07:28:51.991851 949749 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1229 07:28:51.992128 949749 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I1229 07:28:51.992138 949749 kubeadm.go:319]
I1229 07:28:51.992206 949749 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
W1229 07:28:51.992360 949749 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-136540 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-136540 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.00003953s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-136540 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-136540 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.00003953s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
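Before the retry below, the usual way to dig into a kubelet health failure like this is the pair of commands kubeadm itself suggests above, plus the endpoint it was polling; a sketch, assuming a shell inside the force-systemd-flag-136540 container:

  systemctl status kubelet                         # is the unit running or crash-looping?
  journalctl -xeu kubelet --no-pager | tail -n 50  # most recent kubelet log lines
  curl -sSL http://127.0.0.1:10248/healthz         # the health check kubeadm waits up to 4m0s for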
I1229 07:28:51.992440 949749 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
I1229 07:28:52.418971 949749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1229 07:28:52.431883 949749 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1229 07:28:52.431947 949749 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1229 07:28:52.439564 949749 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1229 07:28:52.439582 949749 kubeadm.go:158] found existing configuration files:
I1229 07:28:52.439631 949749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1229 07:28:52.447231 949749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1229 07:28:52.447294 949749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1229 07:28:52.454516 949749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1229 07:28:52.462044 949749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1229 07:28:52.462110 949749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1229 07:28:52.469355 949749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1229 07:28:52.476888 949749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1229 07:28:52.476953 949749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1229 07:28:52.484710 949749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1229 07:28:52.492047 949749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1229 07:28:52.492108 949749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1229 07:28:52.499152 949749 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1229 07:28:52.615409 949749 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1229 07:28:52.615841 949749 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1229 07:28:52.688523 949749 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
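For readability, the kubeadm invocation issued above can be spelled out across lines; this is an equivalent form of the same command (same config file, same preflight checks skipped), not a different one:
    sudo env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" \
      kubeadm init \
        --config /var/tmp/minikube/kubeadm.yaml \
        --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables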
I1229 07:32:54.115439 949749 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I1229 07:32:54.115477 949749 kubeadm.go:319]
I1229 07:32:54.115596 949749 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
I1229 07:32:54.120837 949749 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
I1229 07:32:54.120898 949749 kubeadm.go:319] [preflight] Running pre-flight checks
I1229 07:32:54.120992 949749 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1229 07:32:54.121051  949749 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1229 07:32:54.121090  949749 kubeadm.go:319] OS: Linux
I1229 07:32:54.121140  949749 kubeadm.go:319] CGROUPS_CPU: enabled
I1229 07:32:54.121192  949749 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1229 07:32:54.121243  949749 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1229 07:32:54.121296  949749 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1229 07:32:54.121348  949749 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1229 07:32:54.121401  949749 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1229 07:32:54.121451  949749 kubeadm.go:319] CGROUPS_PIDS: enabled
I1229 07:32:54.121504  949749 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1229 07:32:54.121554  949749 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1229 07:32:54.121630 949749 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1229 07:32:54.121728 949749 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1229 07:32:54.121822 949749 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1229 07:32:54.121888 949749 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1229 07:32:54.125605 949749 out.go:252] - Generating certificates and keys ...
I1229 07:32:54.125713 949749 kubeadm.go:319] [certs] Using existing ca certificate authority
I1229 07:32:54.125788 949749 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1229 07:32:54.125933 949749 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1229 07:32:54.126020 949749 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
I1229 07:32:54.126096 949749 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
I1229 07:32:54.126205 949749 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
I1229 07:32:54.126299 949749 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
I1229 07:32:54.126381 949749 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
I1229 07:32:54.126493 949749 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1229 07:32:54.126610 949749 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1229 07:32:54.126674 949749 kubeadm.go:319] [certs] Using the existing "sa" key
I1229 07:32:54.126770 949749 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1229 07:32:54.126842 949749 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1229 07:32:54.126914 949749 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1229 07:32:54.126977 949749 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1229 07:32:54.127061 949749 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1229 07:32:54.127149 949749 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1229 07:32:54.127304 949749 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1229 07:32:54.127382 949749 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1229 07:32:54.130556 949749 out.go:252] - Booting up control plane ...
I1229 07:32:54.130667 949749 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1229 07:32:54.130752 949749 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1229 07:32:54.130843 949749 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1229 07:32:54.131025 949749 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1229 07:32:54.131132 949749 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1229 07:32:54.131248 949749 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1229 07:32:54.131361 949749 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1229 07:32:54.131431 949749 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1229 07:32:54.131608 949749 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1229 07:32:54.131770 949749 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1229 07:32:54.131847 949749 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000601427s
I1229 07:32:54.131856 949749 kubeadm.go:319]
I1229 07:32:54.131930 949749 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1229 07:32:54.131984 949749 kubeadm.go:319] - The kubelet is not running
I1229 07:32:54.132174 949749 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1229 07:32:54.132203 949749 kubeadm.go:319]
I1229 07:32:54.132356 949749 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1229 07:32:54.132424 949749 kubeadm.go:319] - 'systemctl status kubelet'
I1229 07:32:54.132476 949749 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1229 07:32:54.132539 949749 kubeadm.go:319]
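Both commands kubeadm suggests have to be run inside the node container, not on the Jenkins host; with the docker driver the simplest route is minikube ssh, using the same profile name as this run:
    out/minikube-linux-arm64 -p force-systemd-flag-136540 ssh "sudo systemctl status kubelet --no-pager"
    out/minikube-linux-arm64 -p force-systemd-flag-136540 ssh "sudo journalctl -xeu kubelet | tail -n 100"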
I1229 07:32:54.132577 949749 kubeadm.go:403] duration metric: took 8m6.655800799s to StartCluster
I1229 07:32:54.132629 949749 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
I1229 07:32:54.132713 949749 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
I1229 07:32:54.187548 949749 cri.go:96] found id: ""
I1229 07:32:54.187637 949749 logs.go:282] 0 containers: []
W1229 07:32:54.187660 949749 logs.go:284] No container was found matching "kube-apiserver"
I1229 07:32:54.187700 949749 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
I1229 07:32:54.187803 949749 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
I1229 07:32:54.217390 949749 cri.go:96] found id: ""
I1229 07:32:54.217455 949749 logs.go:282] 0 containers: []
W1229 07:32:54.217487 949749 logs.go:284] No container was found matching "etcd"
I1229 07:32:54.217507 949749 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
I1229 07:32:54.217596 949749 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
I1229 07:32:54.246434 949749 cri.go:96] found id: ""
I1229 07:32:54.246518 949749 logs.go:282] 0 containers: []
W1229 07:32:54.246541 949749 logs.go:284] No container was found matching "coredns"
I1229 07:32:54.246561 949749 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
I1229 07:32:54.246672 949749 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
I1229 07:32:54.279822 949749 cri.go:96] found id: ""
I1229 07:32:54.279884 949749 logs.go:282] 0 containers: []
W1229 07:32:54.279914 949749 logs.go:284] No container was found matching "kube-scheduler"
I1229 07:32:54.279933 949749 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
I1229 07:32:54.280019 949749 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
I1229 07:32:54.308671 949749 cri.go:96] found id: ""
I1229 07:32:54.308750 949749 logs.go:282] 0 containers: []
W1229 07:32:54.308773 949749 logs.go:284] No container was found matching "kube-proxy"
I1229 07:32:54.308795 949749 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
I1229 07:32:54.308901 949749 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
I1229 07:32:54.350922 949749 cri.go:96] found id: ""
I1229 07:32:54.350991 949749 logs.go:282] 0 containers: []
W1229 07:32:54.351031 949749 logs.go:284] No container was found matching "kube-controller-manager"
I1229 07:32:54.351058 949749 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
I1229 07:32:54.351143 949749 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
I1229 07:32:54.397671 949749 cri.go:96] found id: ""
I1229 07:32:54.397733 949749 logs.go:282] 0 containers: []
W1229 07:32:54.397771 949749 logs.go:284] No container was found matching "kindnet"
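Each listing above asks the CRI for any container, running or exited, whose name matches one of the expected control-plane components; every query returned an empty ID list, which fits a kubelet that never started a single static pod. The same query can be repeated by hand inside the node, e.g. for the API server:
    sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver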
I1229 07:32:54.397801 949749 logs.go:123] Gathering logs for container status ...
I1229 07:32:54.397849 949749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1229 07:32:54.472421 949749 logs.go:123] Gathering logs for kubelet ...
I1229 07:32:54.472498 949749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1229 07:32:54.546509 949749 logs.go:123] Gathering logs for dmesg ...
I1229 07:32:54.546588 949749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1229 07:32:54.562327 949749 logs.go:123] Gathering logs for describe nodes ...
I1229 07:32:54.562351 949749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1229 07:32:54.652514 949749 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1229 07:32:54.644694 5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1229 07:32:54.645425 5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1229 07:32:54.646973 5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1229 07:32:54.647305 5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1229 07:32:54.648732 5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
E1229 07:32:54.644694 5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1229 07:32:54.645425 5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1229 07:32:54.646973 5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1229 07:32:54.647305 5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1229 07:32:54.648732 5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
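The connection-refused errors above are a downstream symptom rather than a separate fault: with no kube-apiserver container running, nothing listens on 8443, so any kubectl call against the node fails the same way. A quick sanity check from the host (assuming curl is available in the node image):
    out/minikube-linux-arm64 -p force-systemd-flag-136540 ssh "curl -sk https://localhost:8443/healthz || echo apiserver-not-reachable"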
I1229 07:32:54.652533 949749 logs.go:123] Gathering logs for Docker ...
I1229 07:32:54.652544 949749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
W1229 07:32:54.684054 949749 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000601427s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
W1229 07:32:54.684224 949749 out.go:285] *
*
W1229 07:32:54.684342 949749 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000601427s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
W1229 07:32:54.684422 949749 out.go:285] *
*
W1229 07:32:54.684712 949749 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1229 07:32:54.690679 949749 out.go:203]
W1229 07:32:54.697342 949749 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000601427s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
W1229 07:32:54.697393 949749 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W1229 07:32:54.697418 949749 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I1229 07:32:54.700360 949749 out.go:203]
** /stderr **
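The failure is classified as K8S_KUBELET_NOT_RUNNING, and the suggestion printed above points at the usual cause for --force-systemd runs: a cgroup-driver mismatch between the kubelet and the container runtime inside the node. Two follow-ups worth trying (the first command form is taken from the test itself; the retry flags simply apply the printed suggestion):
    # what cgroup driver does Docker report inside the node?
    out/minikube-linux-arm64 -p force-systemd-flag-136540 ssh "docker info --format {{.CgroupDriver}}"
    # retry with the kubelet cgroup driver pinned to systemd, as suggested
    out/minikube-linux-arm64 start -p force-systemd-flag-136540 --memory=3072 --force-systemd --driver=docker --container-runtime=docker --extra-config=kubelet.cgroup-driver=systemd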
docker_test.go:93: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-flag-136540 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=docker" : exit status 109
docker_test.go:110: (dbg) Run: out/minikube-linux-arm64 -p force-systemd-flag-136540 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2025-12-29 07:32:55.288178505 +0000 UTC m=+2799.950051689
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======> post-mortem[TestForceSystemdFlag]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======> post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:240: (dbg) Run: docker inspect force-systemd-flag-136540
helpers_test.go:244: (dbg) docker inspect force-systemd-flag-136540:
-- stdout --
[
{
"Id": "a72da115069c25210cbf5b2e47007d177de1a563ac39ebc21da0365615ad19bf",
"Created": "2025-12-29T07:24:36.568532723Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 950337,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-12-29T07:24:36.646493074Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:b9f008732615ab177a9385fb47e8e8b6783b73758be2e5f7e791427d50517cf7",
"ResolvConfPath": "/var/lib/docker/containers/a72da115069c25210cbf5b2e47007d177de1a563ac39ebc21da0365615ad19bf/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/a72da115069c25210cbf5b2e47007d177de1a563ac39ebc21da0365615ad19bf/hostname",
"HostsPath": "/var/lib/docker/containers/a72da115069c25210cbf5b2e47007d177de1a563ac39ebc21da0365615ad19bf/hosts",
"LogPath": "/var/lib/docker/containers/a72da115069c25210cbf5b2e47007d177de1a563ac39ebc21da0365615ad19bf/a72da115069c25210cbf5b2e47007d177de1a563ac39ebc21da0365615ad19bf-json.log",
"Name": "/force-systemd-flag-136540",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"force-systemd-flag-136540:/var",
"/lib/modules:/lib/modules:ro"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "force-systemd-flag-136540",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 3221225472,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 6442450944,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "a72da115069c25210cbf5b2e47007d177de1a563ac39ebc21da0365615ad19bf",
"LowerDir": "/var/lib/docker/overlay2/2102d3b457127c21dd80dafc7eb68e7f83bd4c0de295f9325829fe130feb96f2-init/diff:/var/lib/docker/overlay2/3788d7c7c8e91fd886b287c15675406ce26d741d5d808d18bcc9c345d38db92c/diff",
"MergedDir": "/var/lib/docker/overlay2/2102d3b457127c21dd80dafc7eb68e7f83bd4c0de295f9325829fe130feb96f2/merged",
"UpperDir": "/var/lib/docker/overlay2/2102d3b457127c21dd80dafc7eb68e7f83bd4c0de295f9325829fe130feb96f2/diff",
"WorkDir": "/var/lib/docker/overlay2/2102d3b457127c21dd80dafc7eb68e7f83bd4c0de295f9325829fe130feb96f2/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "volume",
"Name": "force-systemd-flag-136540",
"Source": "/var/lib/docker/volumes/force-systemd-flag-136540/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
},
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
}
],
"Config": {
"Hostname": "force-systemd-flag-136540",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "force-systemd-flag-136540",
"name.minikube.sigs.k8s.io": "force-systemd-flag-136540",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "19dfc4568a1bf4473bce23b00c3cd841299796210ad317404f50560cf0e8d9f5",
"SandboxKey": "/var/run/docker/netns/19dfc4568a1b",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33762"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33763"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33766"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33764"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33765"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"force-systemd-flag-136540": {
"IPAMConfig": {
"IPv4Address": "192.168.85.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "ee:bc:b0:83:f6:6d",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "4fb1612089e1045dee558dd90cf8f83fb667f1cf48f8746bd58486f63fc27afa",
"EndpointID": "b10a779d4a752949bc78560da1067368878d70971db45727b187910848bc4948",
"Gateway": "192.168.85.1",
"IPAddress": "192.168.85.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"force-systemd-flag-136540",
"a72da115069c"
]
}
}
}
}
]
-- /stdout --
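The inspect dump shows the node container itself is up: it is still running, was created with the requested 3 GiB limit (Memory 3221225472 bytes, matching --memory=3072), runs privileged with CgroupnsMode "host", and publishes the API server port 8443/tcp only on 127.0.0.1 (host port 33765). A narrower query for just those fields (the format string is illustrative):
    docker inspect force-systemd-flag-136540 \
      --format '{{.State.Status}} {{.HostConfig.Memory}} {{.HostConfig.CgroupnsMode}} {{index (index .NetworkSettings.Ports "8443/tcp") 0}}'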
helpers_test.go:248: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-136540 -n force-systemd-flag-136540
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-136540 -n force-systemd-flag-136540: exit status 6 (391.02333ms)
-- stdout --
Running
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1229 07:32:55.687078 962194 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-136540" does not appear in /home/jenkins/minikube-integration/22353-723215/kubeconfig
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
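Exit status 6 from the status probe pairs a running host container with a cluster that is not usable: the start never completed, so the "force-systemd-flag-136540" entry was never written to the kubeconfig the test points at (the status.go error above). Once a start does succeed, the warning's own remedy applies:
    out/minikube-linux-arm64 -p force-systemd-flag-136540 update-context
    out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-136540 -n force-systemd-flag-136540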
helpers_test.go:253: <<< TestForceSystemdFlag FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======> post-mortem[TestForceSystemdFlag]: minikube logs <======
helpers_test.go:256: (dbg) Run: out/minikube-linux-arm64 -p force-systemd-flag-136540 logs -n 25
helpers_test.go:261: TestForceSystemdFlag logs:
-- stdout --
==> Audit <==
┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬───────────
──────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼───────────
──────────┤
│ ssh │ -p cilium-728759 sudo systemctl cat cri-docker --no-pager │ cilium-728759 │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │ │
│ ssh │ -p cilium-728759 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ cilium-728759 │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │ │
│ ssh │ -p cilium-728759 sudo cat /usr/lib/systemd/system/cri-docker.service │ cilium-728759 │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │ │
│ ssh │ -p cilium-728759 sudo cri-dockerd --version │ cilium-728759 │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │ │
│ ssh │ -p cilium-728759 sudo systemctl status containerd --all --full --no-pager │ cilium-728759 │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │ │
│ ssh │ -p cilium-728759 sudo systemctl cat containerd --no-pager │ cilium-728759 │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │ │
│ ssh │ -p cilium-728759 sudo cat /lib/systemd/system/containerd.service │ cilium-728759 │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │ │
│ ssh │ -p cilium-728759 sudo cat /etc/containerd/config.toml │ cilium-728759 │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │ │
│ ssh │ -p cilium-728759 sudo containerd config dump │ cilium-728759 │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │ │
│ ssh │ -p cilium-728759 sudo systemctl status crio --all --full --no-pager │ cilium-728759 │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │ │
│ ssh │ -p cilium-728759 sudo systemctl cat crio --no-pager │ cilium-728759 │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │ │
│ ssh │ -p cilium-728759 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \; │ cilium-728759 │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │ │
│ ssh │ -p cilium-728759 sudo crio config │ cilium-728759 │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │ │
│ delete │ -p cilium-728759 │ cilium-728759 │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │ 29 Dec 25 07:24 UTC │
│ start │ -p force-systemd-env-262325 --memory=3072 --alsologtostderr -v=5 --driver=docker --container-runtime=docker │ force-systemd-env-262325 │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │ │
│ ssh │ -p NoKubernetes-198702 sudo systemctl is-active --quiet service kubelet │ NoKubernetes-198702 │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │ │
│ stop │ -p NoKubernetes-198702 │ NoKubernetes-198702 │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │ 29 Dec 25 07:24 UTC │
│ start │ -p NoKubernetes-198702 --driver=docker --container-runtime=docker │ NoKubernetes-198702 │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │ 29 Dec 25 07:24 UTC │
│ ssh │ -p NoKubernetes-198702 sudo systemctl is-active --quiet service kubelet │ NoKubernetes-198702 │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │ │
│ delete │ -p NoKubernetes-198702 │ NoKubernetes-198702 │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │ 29 Dec 25 07:24 UTC │
│ start │ -p force-systemd-flag-136540 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=docker │ force-systemd-flag-136540 │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │ │
│ ssh │ force-systemd-env-262325 ssh docker info --format {{.CgroupDriver}} │ force-systemd-env-262325 │ jenkins │ v1.37.0 │ 29 Dec 25 07:32 UTC │ 29 Dec 25 07:32 UTC │
│ delete │ -p force-systemd-env-262325 │ force-systemd-env-262325 │ jenkins │ v1.37.0 │ 29 Dec 25 07:32 UTC │ 29 Dec 25 07:32 UTC │
│ start │ -p docker-flags-139514 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker --container-runtime=docker │ docker-flags-139514 │ jenkins │ v1.37.0 │ 29 Dec 25 07:32 UTC │ │
│ ssh │ force-systemd-flag-136540 ssh docker info --format {{.CgroupDriver}} │ force-systemd-flag-136540 │ jenkins │ v1.37.0 │ 29 Dec 25 07:32 UTC │ 29 Dec 25 07:32 UTC │
└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴───────────
──────────┘
==> Last Start <==
Log file created at: 2025/12/29 07:32:44
Running on machine: ip-172-31-21-244
Binary: Built with gc go1.25.5 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1229 07:32:44.762208 960427 out.go:360] Setting OutFile to fd 1 ...
I1229 07:32:44.762395 960427 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 07:32:44.762429 960427 out.go:374] Setting ErrFile to fd 2...
I1229 07:32:44.762449 960427 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 07:32:44.762840 960427 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-723215/.minikube/bin
I1229 07:32:44.763428 960427 out.go:368] Setting JSON to false
I1229 07:32:44.764429 960427 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":15314,"bootTime":1766978251,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
I1229 07:32:44.764547 960427 start.go:143] virtualization:
I1229 07:32:44.768188 960427 out.go:179] * [docker-flags-139514] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I1229 07:32:44.772528 960427 out.go:179] - MINIKUBE_LOCATION=22353
I1229 07:32:44.772620 960427 notify.go:221] Checking for updates...
I1229 07:32:44.778947 960427 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1229 07:32:44.782241 960427 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22353-723215/kubeconfig
I1229 07:32:44.785320 960427 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-723215/.minikube
I1229 07:32:44.788315 960427 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I1229 07:32:44.791329 960427 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1229 07:32:44.794843 960427 config.go:182] Loaded profile config "force-systemd-flag-136540": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1229 07:32:44.794959 960427 driver.go:422] Setting default libvirt URI to qemu:///system
I1229 07:32:44.821843 960427 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
I1229 07:32:44.821993 960427 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1229 07:32:44.881006 960427 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-29 07:32:44.872206051 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1229 07:32:44.881115 960427 docker.go:319] overlay module found
I1229 07:32:44.884440 960427 out.go:179] * Using the docker driver based on user configuration
I1229 07:32:44.887362 960427 start.go:309] selected driver: docker
I1229 07:32:44.887379 960427 start.go:928] validating driver "docker" against <nil>
I1229 07:32:44.887393 960427 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1229 07:32:44.888202 960427 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1229 07:32:44.936485 960427 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-29 07:32:44.927869991 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1229 07:32:44.936638 960427 start_flags.go:333] no existing cluster config was found, will generate one from the flags
I1229 07:32:44.936864 960427 start_flags.go:1014] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
I1229 07:32:44.939829 960427 out.go:179] * Using Docker driver with root privileges
I1229 07:32:44.942698 960427 cni.go:84] Creating CNI manager for ""
I1229 07:32:44.942772 960427 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1229 07:32:44.942786 960427 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I1229 07:32:44.942862 960427 start.go:353] cluster config:
{Name:docker-flags-139514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:docker-flags-139514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: Socket
VMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1229 07:32:44.946038 960427 out.go:179] * Starting "docker-flags-139514" primary control-plane node in "docker-flags-139514" cluster
I1229 07:32:44.948914 960427 cache.go:134] Beginning downloading kic base image for docker with docker
I1229 07:32:44.951839 960427 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
I1229 07:32:44.954666 960427 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I1229 07:32:44.954725 960427 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-723215/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4
I1229 07:32:44.954738 960427 cache.go:65] Caching tarball of preloaded images
I1229 07:32:44.954763 960427 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
I1229 07:32:44.954823 960427 preload.go:251] Found /home/jenkins/minikube-integration/22353-723215/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I1229 07:32:44.954834 960427 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
I1229 07:32:44.954947 960427 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/docker-flags-139514/config.json ...
I1229 07:32:44.954964 960427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/docker-flags-139514/config.json: {Name:mk7699ffe52c13d2bb58206a9cb556baefbeb6ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1229 07:32:44.974861 960427 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
I1229 07:32:44.974883 960427 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
I1229 07:32:44.974898 960427 cache.go:243] Successfully downloaded all kic artifacts
I1229 07:32:44.974982 960427 start.go:360] acquireMachinesLock for docker-flags-139514: {Name:mk2ea4414d7cf67a9e64fe0d2913f314c869f3a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1229 07:32:44.975169 960427 start.go:364] duration metric: took 126.643µs to acquireMachinesLock for "docker-flags-139514"
I1229 07:32:44.975211 960427 start.go:93] Provisioning new machine with config: &{Name:docker-flags-139514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:docker-flags-139514 Namespace:
default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I1229 07:32:44.975341 960427 start.go:125] createHost starting for "" (driver="docker")
I1229 07:32:44.979442 960427 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
I1229 07:32:44.979683 960427 start.go:159] libmachine.API.Create for "docker-flags-139514" (driver="docker")
I1229 07:32:44.979720 960427 client.go:173] LocalClient.Create starting
I1229 07:32:44.979810 960427 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca.pem
I1229 07:32:44.979856 960427 main.go:144] libmachine: Decoding PEM data...
I1229 07:32:44.979874 960427 main.go:144] libmachine: Parsing certificate...
I1229 07:32:44.979928 960427 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-723215/.minikube/certs/cert.pem
I1229 07:32:44.979950 960427 main.go:144] libmachine: Decoding PEM data...
I1229 07:32:44.979962 960427 main.go:144] libmachine: Parsing certificate...
I1229 07:32:44.980355 960427 cli_runner.go:164] Run: docker network inspect docker-flags-139514 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1229 07:32:44.996046 960427 cli_runner.go:211] docker network inspect docker-flags-139514 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1229 07:32:44.996154 960427 network_create.go:284] running [docker network inspect docker-flags-139514] to gather additional debugging logs...
I1229 07:32:44.996176 960427 cli_runner.go:164] Run: docker network inspect docker-flags-139514
W1229 07:32:45.041911 960427 cli_runner.go:211] docker network inspect docker-flags-139514 returned with exit code 1
I1229 07:32:45.041946 960427 network_create.go:287] error running [docker network inspect docker-flags-139514]: docker network inspect docker-flags-139514: exit status 1
stdout:
[]
stderr:
Error response from daemon: network docker-flags-139514 not found
I1229 07:32:45.041960 960427 network_create.go:289] output of [docker network inspect docker-flags-139514]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network docker-flags-139514 not found
** /stderr **
I1229 07:32:45.042085 960427 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1229 07:32:45.066171 960427 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6e99902584b0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:b2:8c:10:44:52} reservation:<nil>}
I1229 07:32:45.066628 960427 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e5c59511c8c6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:c4:8e:57:d6:4a} reservation:<nil>}
I1229 07:32:45.067089 960427 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-857d67da440f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:66:bc:86:0f:2c:21} reservation:<nil>}
I1229 07:32:45.067744 960427 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a3f050}
I1229 07:32:45.067849 960427 network_create.go:124] attempt to create docker network docker-flags-139514 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
I1229 07:32:45.067931 960427 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-139514 docker-flags-139514
I1229 07:32:45.171086 960427 network_create.go:108] docker network docker-flags-139514 192.168.76.0/24 created
I1229 07:32:45.171205 960427 kic.go:121] calculated static IP "192.168.76.2" for the "docker-flags-139514" container
I1229 07:32:45.171366 960427 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1229 07:32:45.195384 960427 cli_runner.go:164] Run: docker volume create docker-flags-139514 --label name.minikube.sigs.k8s.io=docker-flags-139514 --label created_by.minikube.sigs.k8s.io=true
I1229 07:32:45.246652 960427 oci.go:103] Successfully created a docker volume docker-flags-139514
I1229 07:32:45.246758 960427 cli_runner.go:164] Run: docker run --rm --name docker-flags-139514-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-139514 --entrypoint /usr/bin/test -v docker-flags-139514:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -d /var/lib
I1229 07:32:45.758171 960427 oci.go:107] Successfully prepared a docker volume docker-flags-139514
I1229 07:32:45.758248 960427 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I1229 07:32:45.758263 960427 kic.go:194] Starting extracting preloaded images to volume ...
I1229 07:32:45.758336 960427 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-723215/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v docker-flags-139514:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir
I1229 07:32:49.061698 960427 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-723215/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v docker-flags-139514:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir: (3.303321006s)
I1229 07:32:49.061731 960427 kic.go:203] duration metric: took 3.303465593s to extract preloaded images to volume ...
W1229 07:32:49.061878 960427 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1229 07:32:49.061996 960427 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1229 07:32:49.123610 960427 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname docker-flags-139514 --name docker-flags-139514 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-139514 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=docker-flags-139514 --network docker-flags-139514 --ip 192.168.76.2 --volume docker-flags-139514:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409
I1229 07:32:49.433568 960427 cli_runner.go:164] Run: docker container inspect docker-flags-139514 --format={{.State.Running}}
I1229 07:32:49.454050 960427 cli_runner.go:164] Run: docker container inspect docker-flags-139514 --format={{.State.Status}}
I1229 07:32:49.473339 960427 cli_runner.go:164] Run: docker exec docker-flags-139514 stat /var/lib/dpkg/alternatives/iptables
I1229 07:32:49.524175 960427 oci.go:144] the created container "docker-flags-139514" has a running status.
I1229 07:32:49.524202 960427 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22353-723215/.minikube/machines/docker-flags-139514/id_rsa...
I1229 07:32:49.618990 960427 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/machines/docker-flags-139514/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I1229 07:32:49.619078 960427 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22353-723215/.minikube/machines/docker-flags-139514/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1229 07:32:49.642637 960427 cli_runner.go:164] Run: docker container inspect docker-flags-139514 --format={{.State.Status}}
I1229 07:32:49.662599 960427 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1229 07:32:49.662623 960427 kic_runner.go:114] Args: [docker exec --privileged docker-flags-139514 chown docker:docker /home/docker/.ssh/authorized_keys]
I1229 07:32:49.718041 960427 cli_runner.go:164] Run: docker container inspect docker-flags-139514 --format={{.State.Status}}
I1229 07:32:49.750101 960427 machine.go:94] provisionDockerMachine start ...
I1229 07:32:49.750208 960427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-139514
I1229 07:32:54.115439 949749 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I1229 07:32:54.115477 949749 kubeadm.go:319]
I1229 07:32:54.115596 949749 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
I1229 07:32:54.120837 949749 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
I1229 07:32:54.120898 949749 kubeadm.go:319] [preflight] Running pre-flight checks
I1229 07:32:54.120992 949749 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1229 07:32:54.121051 949749 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1229 07:32:54.121090 949749 kubeadm.go:319] OS: Linux
I1229 07:32:54.121140 949749 kubeadm.go:319] CGROUPS_CPU: enabled
I1229 07:32:54.121192 949749 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1229 07:32:54.121243 949749 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1229 07:32:54.121296 949749 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1229 07:32:54.121348 949749 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1229 07:32:54.121401 949749 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1229 07:32:54.121451 949749 kubeadm.go:319] CGROUPS_PIDS: enabled
I1229 07:32:54.121504 949749 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1229 07:32:54.121554 949749 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1229 07:32:54.121630 949749 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1229 07:32:54.121728 949749 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1229 07:32:54.121822 949749 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1229 07:32:54.121888 949749 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1229 07:32:54.125605 949749 out.go:252] - Generating certificates and keys ...
I1229 07:32:54.125713 949749 kubeadm.go:319] [certs] Using existing ca certificate authority
I1229 07:32:54.125788 949749 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1229 07:32:54.125933 949749 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1229 07:32:54.126020 949749 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
I1229 07:32:54.126096 949749 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
I1229 07:32:54.126205 949749 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
I1229 07:32:54.126299 949749 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
I1229 07:32:54.126381 949749 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
I1229 07:32:54.126493 949749 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1229 07:32:54.126610 949749 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1229 07:32:54.126674 949749 kubeadm.go:319] [certs] Using the existing "sa" key
I1229 07:32:54.126770 949749 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1229 07:32:54.126842 949749 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1229 07:32:54.126914 949749 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1229 07:32:54.126977 949749 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1229 07:32:54.127061 949749 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1229 07:32:54.127149 949749 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1229 07:32:54.127304 949749 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1229 07:32:54.127382 949749 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1229 07:32:54.130556 949749 out.go:252] - Booting up control plane ...
I1229 07:32:54.130667 949749 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1229 07:32:54.130752 949749 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1229 07:32:54.130843 949749 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1229 07:32:54.131025 949749 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1229 07:32:54.131132 949749 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1229 07:32:54.131248 949749 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1229 07:32:54.131361 949749 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1229 07:32:54.131431 949749 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1229 07:32:54.131608 949749 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1229 07:32:54.131770 949749 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1229 07:32:54.131847 949749 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000601427s
I1229 07:32:54.131856 949749 kubeadm.go:319]
I1229 07:32:54.131930 949749 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1229 07:32:54.131984 949749 kubeadm.go:319] - The kubelet is not running
I1229 07:32:54.132174 949749 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1229 07:32:54.132203 949749 kubeadm.go:319]
I1229 07:32:54.132356 949749 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1229 07:32:54.132424 949749 kubeadm.go:319] - 'systemctl status kubelet'
I1229 07:32:54.132476 949749 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1229 07:32:54.132539 949749 kubeadm.go:319]
I1229 07:32:54.132577 949749 kubeadm.go:403] duration metric: took 8m6.655800799s to StartCluster
I1229 07:32:54.132629 949749 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
I1229 07:32:54.132713 949749 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
I1229 07:32:54.187548 949749 cri.go:96] found id: ""
I1229 07:32:54.187637 949749 logs.go:282] 0 containers: []
W1229 07:32:54.187660 949749 logs.go:284] No container was found matching "kube-apiserver"
I1229 07:32:54.187700 949749 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
I1229 07:32:54.187803 949749 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
I1229 07:32:54.217390 949749 cri.go:96] found id: ""
I1229 07:32:54.217455 949749 logs.go:282] 0 containers: []
W1229 07:32:54.217487 949749 logs.go:284] No container was found matching "etcd"
I1229 07:32:54.217507 949749 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
I1229 07:32:54.217596 949749 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
I1229 07:32:54.246434 949749 cri.go:96] found id: ""
I1229 07:32:54.246518 949749 logs.go:282] 0 containers: []
W1229 07:32:54.246541 949749 logs.go:284] No container was found matching "coredns"
I1229 07:32:54.246561 949749 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
I1229 07:32:54.246672 949749 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
I1229 07:32:54.279822 949749 cri.go:96] found id: ""
I1229 07:32:54.279884 949749 logs.go:282] 0 containers: []
W1229 07:32:54.279914 949749 logs.go:284] No container was found matching "kube-scheduler"
I1229 07:32:54.279933 949749 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
I1229 07:32:54.280019 949749 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
I1229 07:32:54.308671 949749 cri.go:96] found id: ""
I1229 07:32:54.308750 949749 logs.go:282] 0 containers: []
W1229 07:32:54.308773 949749 logs.go:284] No container was found matching "kube-proxy"
I1229 07:32:54.308795 949749 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
I1229 07:32:54.308901 949749 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
I1229 07:32:54.350922 949749 cri.go:96] found id: ""
I1229 07:32:54.350991 949749 logs.go:282] 0 containers: []
W1229 07:32:54.351031 949749 logs.go:284] No container was found matching "kube-controller-manager"
I1229 07:32:54.351058 949749 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
I1229 07:32:54.351143 949749 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
I1229 07:32:54.397671 949749 cri.go:96] found id: ""
I1229 07:32:54.397733 949749 logs.go:282] 0 containers: []
W1229 07:32:54.397771 949749 logs.go:284] No container was found matching "kindnet"
I1229 07:32:54.397801 949749 logs.go:123] Gathering logs for container status ...
I1229 07:32:54.397849 949749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1229 07:32:54.472421 949749 logs.go:123] Gathering logs for kubelet ...
I1229 07:32:54.472498 949749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1229 07:32:54.546509 949749 logs.go:123] Gathering logs for dmesg ...
I1229 07:32:54.546588 949749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1229 07:32:54.562327 949749 logs.go:123] Gathering logs for describe nodes ...
I1229 07:32:54.562351 949749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1229 07:32:54.652514 949749 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1229 07:32:54.644694 5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1229 07:32:54.645425 5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1229 07:32:54.646973 5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1229 07:32:54.647305 5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1229 07:32:54.648732 5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
E1229 07:32:54.644694 5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1229 07:32:54.645425 5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1229 07:32:54.646973 5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1229 07:32:54.647305 5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1229 07:32:54.648732 5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I1229 07:32:54.652533 949749 logs.go:123] Gathering logs for Docker ...
I1229 07:32:54.652544 949749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
W1229 07:32:54.684054 949749 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000601427s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
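The pre-flight warnings and the failed kubelet health check above suggest two manual checks before retrying: whether the kubelet ever came up inside the node, and which cgroup version the node is actually running (the SystemVerification warning concerns cgroups v1). A minimal sketch of those checks, assuming the failing profile is the force-systemd-flag-136540 start from the audit table above (the profile name is inferred from the start times, not stated in this block), using the same ssh pattern as the earlier table entries:

  # kubelet state and recent logs, as suggested by kubeadm itself
  out/minikube-linux-arm64 ssh -p force-systemd-flag-136540 sudo systemctl status kubelet --no-pager
  out/minikube-linux-arm64 ssh -p force-systemd-flag-136540 sudo journalctl -xeu kubelet --no-pager
  # cgroup filesystem type: cgroup2fs means cgroups v2, tmpfs means the deprecated v1 layout
  out/minikube-linux-arm64 ssh -p force-systemd-flag-136540 stat -fc %T /sys/fs/cgroup
  # cgroup driver Docker is actually using (what --force-systemd is meant to switch to systemd)
  out/minikube-linux-arm64 ssh -p force-systemd-flag-136540 docker info --format {{.CgroupDriver}}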
W1229 07:32:54.684224 949749 out.go:285] *
W1229 07:32:54.684342 949749 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000601427s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
W1229 07:32:54.684422 949749 out.go:285] *
W1229 07:32:54.684712 949749 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1229 07:32:54.690679 949749 out.go:203]
W1229 07:32:54.697342 949749 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000601427s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
W1229 07:32:54.697393 949749 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W1229 07:32:54.697418 949749 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I1229 07:32:54.700360 949749 out.go:203]
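The Suggestion line above names a concrete retry: pass the kubelet cgroup driver explicitly via --extra-config. A hedged sketch of that retry, reusing the flags from the force-systemd-flag-136540 start entry in the audit table and adding only the flag the suggestion names (deleting the half-created profile first is an assumption, not something the log requires):

  out/minikube-linux-arm64 delete -p force-systemd-flag-136540
  out/minikube-linux-arm64 start -p force-systemd-flag-136540 --memory=3072 --force-systemd \
    --extra-config=kubelet.cgroup-driver=systemd \
    --alsologtostderr -v=5 --driver=docker --container-runtime=docker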
I1229 07:32:49.777926 960427 main.go:144] libmachine: Using SSH client type: native
I1229 07:32:49.778263 960427 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33767 <nil> <nil>}
I1229 07:32:49.778284 960427 main.go:144] libmachine: About to run SSH command:
hostname
I1229 07:32:49.778947 960427 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57796->127.0.0.1:33767: read: connection reset by peer
I1229 07:32:52.935920 960427 main.go:144] libmachine: SSH cmd err, output: <nil>: docker-flags-139514
I1229 07:32:52.936023 960427 ubuntu.go:182] provisioning hostname "docker-flags-139514"
I1229 07:32:52.936149 960427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-139514
I1229 07:32:52.956245 960427 main.go:144] libmachine: Using SSH client type: native
I1229 07:32:52.956561 960427 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33767 <nil> <nil>}
I1229 07:32:52.956573 960427 main.go:144] libmachine: About to run SSH command:
sudo hostname docker-flags-139514 && echo "docker-flags-139514" | sudo tee /etc/hostname
I1229 07:32:53.121598 960427 main.go:144] libmachine: SSH cmd err, output: <nil>: docker-flags-139514
I1229 07:32:53.121714 960427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-139514
I1229 07:32:53.139000 960427 main.go:144] libmachine: Using SSH client type: native
I1229 07:32:53.139310 960427 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33767 <nil> <nil>}
I1229 07:32:53.139332 960427 main.go:144] libmachine: About to run SSH command:
if ! grep -xq '.*\sdocker-flags-139514' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 docker-flags-139514/g' /etc/hosts;
else
echo '127.0.1.1 docker-flags-139514' | sudo tee -a /etc/hosts;
fi
fi
I1229 07:32:53.288265 960427 main.go:144] libmachine: SSH cmd err, output: <nil>:
I1229 07:32:53.288294 960427 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-723215/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-723215/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-723215/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-723215/.minikube}
I1229 07:32:53.288324 960427 ubuntu.go:190] setting up certificates
I1229 07:32:53.288332 960427 provision.go:84] configureAuth start
I1229 07:32:53.288391 960427 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" docker-flags-139514
I1229 07:32:53.306771 960427 provision.go:143] copyHostCerts
I1229 07:32:53.306813 960427 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22353-723215/.minikube/cert.pem
I1229 07:32:53.306846 960427 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-723215/.minikube/cert.pem, removing ...
I1229 07:32:53.306852 960427 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-723215/.minikube/cert.pem
I1229 07:32:53.306927 960427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-723215/.minikube/cert.pem (1123 bytes)
I1229 07:32:53.307010 960427 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22353-723215/.minikube/key.pem
I1229 07:32:53.307027 960427 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-723215/.minikube/key.pem, removing ...
I1229 07:32:53.307031 960427 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-723215/.minikube/key.pem
I1229 07:32:53.307056 960427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-723215/.minikube/key.pem (1675 bytes)
I1229 07:32:53.307109 960427 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22353-723215/.minikube/ca.pem
I1229 07:32:53.307124 960427 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-723215/.minikube/ca.pem, removing ...
I1229 07:32:53.307128 960427 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-723215/.minikube/ca.pem
I1229 07:32:53.307151 960427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-723215/.minikube/ca.pem (1082 bytes)
I1229 07:32:53.307247 960427 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-723215/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca-key.pem org=jenkins.docker-flags-139514 san=[127.0.0.1 192.168.76.2 docker-flags-139514 localhost minikube]
I1229 07:32:53.539981 960427 provision.go:177] copyRemoteCerts
I1229 07:32:53.540058 960427 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1229 07:32:53.540100 960427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-139514
I1229 07:32:53.559039 960427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33767 SSHKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/machines/docker-flags-139514/id_rsa Username:docker}
I1229 07:32:53.664881 960427 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/machines/server.pem -> /etc/docker/server.pem
I1229 07:32:53.664935 960427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
I1229 07:32:53.684088 960427 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I1229 07:32:53.684159 960427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1229 07:32:53.703237 960427 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I1229 07:32:53.703311 960427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1229 07:32:53.720953 960427 provision.go:87] duration metric: took 432.606921ms to configureAuth
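One hypothetical way to confirm the SANs baked into the freshly generated server certificate (not part of the test run) would be to inspect the copy placed under /etc/docker on the node:

  sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'

The expected names are the ones listed in the san=[...] field of the generation step above (127.0.0.1, 192.168.76.2, docker-flags-139514, localhost, minikube).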
I1229 07:32:53.720982 960427 ubuntu.go:206] setting minikube options for container-runtime
I1229 07:32:53.721179 960427 config.go:182] Loaded profile config "docker-flags-139514": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1229 07:32:53.721236 960427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-139514
I1229 07:32:53.738495 960427 main.go:144] libmachine: Using SSH client type: native
I1229 07:32:53.738807 960427 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33767 <nil> <nil>}
I1229 07:32:53.738823 960427 main.go:144] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1229 07:32:53.892861 960427 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
I1229 07:32:53.892884 960427 ubuntu.go:71] root file system type: overlay
I1229 07:32:53.893003 960427 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I1229 07:32:53.893076 960427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-139514
I1229 07:32:53.911085 960427 main.go:144] libmachine: Using SSH client type: native
I1229 07:32:53.911405 960427 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33767 <nil> <nil>}
I1229 07:32:53.911494 960427 main.go:144] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
Wants=network-online.target containerd.service
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=always
Environment="FOO=BAR"
Environment="BAZ=BAT"
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
-H fd:// --containerd=/run/containerd/containerd.sock \
-H unix:///var/run/docker.sock \
--default-ulimit=nofile=1048576:1048576 \
--tlsverify \
--tlscacert /etc/docker/ca.pem \
--tlscert /etc/docker/server.pem \
--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 --debug --icc=true
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1229 07:32:54.074542 960427 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
Wants=network-online.target containerd.service
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=always
Environment=FOO=BAR
Environment=BAZ=BAT
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H fd:// --containerd=/run/containerd/containerd.sock -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 --debug --icc=true
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
I1229 07:32:54.074627 960427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-139514
I1229 07:32:54.092272 960427 main.go:144] libmachine: Using SSH client type: native
I1229 07:32:54.092591 960427 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33767 <nil> <nil>}
I1229 07:32:54.092617 960427 main.go:144] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
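Because diff -u exits 0 when the two unit files are identical, the || block above (move the .new file into place, daemon-reload, enable, restart) only runs when the rendered unit actually changed, or when no unit was installed yet. A hypothetical way to check which unit systemd ended up loading (not part of the test run):

  out/minikube-linux-arm64 ssh -p docker-flags-139514 -- sudo systemctl cat docker.service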
==> Docker <==
Dec 29 07:24:44 force-systemd-flag-136540 dockerd[1141]: time="2025-12-29T07:24:44.740692368Z" level=info msg="Restoring containers: start."
Dec 29 07:24:44 force-systemd-flag-136540 dockerd[1141]: time="2025-12-29T07:24:44.760577327Z" level=info msg="Deleting nftables IPv4 rules" error="exit status 1"
Dec 29 07:24:44 force-systemd-flag-136540 dockerd[1141]: time="2025-12-29T07:24:44.780620402Z" level=info msg="Deleting nftables IPv6 rules" error="exit status 1"
Dec 29 07:24:44 force-systemd-flag-136540 dockerd[1141]: time="2025-12-29T07:24:44.958396126Z" level=info msg="Loading containers: done."
Dec 29 07:24:44 force-systemd-flag-136540 dockerd[1141]: time="2025-12-29T07:24:44.969350611Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
Dec 29 07:24:44 force-systemd-flag-136540 dockerd[1141]: time="2025-12-29T07:24:44.969416940Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
Dec 29 07:24:44 force-systemd-flag-136540 dockerd[1141]: time="2025-12-29T07:24:44.969457808Z" level=info msg="Initializing buildkit"
Dec 29 07:24:44 force-systemd-flag-136540 dockerd[1141]: time="2025-12-29T07:24:44.989268915Z" level=info msg="Completed buildkit initialization"
Dec 29 07:24:44 force-systemd-flag-136540 dockerd[1141]: time="2025-12-29T07:24:44.994540173Z" level=info msg="Daemon has completed initialization"
Dec 29 07:24:44 force-systemd-flag-136540 dockerd[1141]: time="2025-12-29T07:24:44.994639945Z" level=info msg="API listen on /run/docker.sock"
Dec 29 07:24:44 force-systemd-flag-136540 dockerd[1141]: time="2025-12-29T07:24:44.994713280Z" level=info msg="API listen on /var/run/docker.sock"
Dec 29 07:24:44 force-systemd-flag-136540 dockerd[1141]: time="2025-12-29T07:24:44.994836535Z" level=info msg="API listen on [::]:2376"
Dec 29 07:24:44 force-systemd-flag-136540 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 29 07:24:45 force-systemd-flag-136540 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
Dec 29 07:24:45 force-systemd-flag-136540 cri-dockerd[1423]: time="2025-12-29T07:24:45Z" level=info msg="Starting cri-dockerd dev (HEAD)"
Dec 29 07:24:45 force-systemd-flag-136540 cri-dockerd[1423]: time="2025-12-29T07:24:45Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
Dec 29 07:24:45 force-systemd-flag-136540 cri-dockerd[1423]: time="2025-12-29T07:24:45Z" level=info msg="Start docker client with request timeout 0s"
Dec 29 07:24:45 force-systemd-flag-136540 cri-dockerd[1423]: time="2025-12-29T07:24:45Z" level=info msg="Hairpin mode is set to hairpin-veth"
Dec 29 07:24:45 force-systemd-flag-136540 cri-dockerd[1423]: time="2025-12-29T07:24:45Z" level=info msg="Loaded network plugin cni"
Dec 29 07:24:45 force-systemd-flag-136540 cri-dockerd[1423]: time="2025-12-29T07:24:45Z" level=info msg="Docker cri networking managed by network plugin cni"
Dec 29 07:24:45 force-systemd-flag-136540 cri-dockerd[1423]: time="2025-12-29T07:24:45Z" level=info msg="Setting cgroupDriver systemd"
Dec 29 07:24:45 force-systemd-flag-136540 cri-dockerd[1423]: time="2025-12-29T07:24:45Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
Dec 29 07:24:45 force-systemd-flag-136540 cri-dockerd[1423]: time="2025-12-29T07:24:45Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
Dec 29 07:24:45 force-systemd-flag-136540 cri-dockerd[1423]: time="2025-12-29T07:24:45Z" level=info msg="Start cri-dockerd grpc backend"
Dec 29 07:24:45 force-systemd-flag-136540 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
==> describe nodes <==
command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1229 07:32:56.418933 5658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1229 07:32:56.419911 5658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1229 07:32:56.421860 5658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1229 07:32:56.422503 5658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1229 07:32:56.424768 5658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
==> dmesg <==
[Dec29 06:14] hrtimer: interrupt took 41514710 ns
[Dec29 06:33] kauditd_printk_skb: 8 callbacks suppressed
[Dec29 06:45] kauditd_printk_skb: 8 callbacks suppressed
==> kernel <==
07:32:56 up 4:15, 0 user, load average: 0.55, 0.81, 1.73
Linux force-systemd-flag-136540 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
==> kubelet <==
Dec 29 07:32:52 force-systemd-flag-136540 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 29 07:32:53 force-systemd-flag-136540 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
Dec 29 07:32:53 force-systemd-flag-136540 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 29 07:32:53 force-systemd-flag-136540 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 29 07:32:53 force-systemd-flag-136540 kubelet[5437]: E1229 07:32:53.697259 5437 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 29 07:32:53 force-systemd-flag-136540 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 29 07:32:53 force-systemd-flag-136540 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 29 07:32:54 force-systemd-flag-136540 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
Dec 29 07:32:54 force-systemd-flag-136540 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 29 07:32:54 force-systemd-flag-136540 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 29 07:32:54 force-systemd-flag-136540 kubelet[5505]: E1229 07:32:54.458505 5505 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 29 07:32:54 force-systemd-flag-136540 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 29 07:32:54 force-systemd-flag-136540 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 29 07:32:55 force-systemd-flag-136540 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
Dec 29 07:32:55 force-systemd-flag-136540 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 29 07:32:55 force-systemd-flag-136540 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 29 07:32:55 force-systemd-flag-136540 kubelet[5542]: E1229 07:32:55.239966 5542 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 29 07:32:55 force-systemd-flag-136540 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 29 07:32:55 force-systemd-flag-136540 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 29 07:32:55 force-systemd-flag-136540 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
Dec 29 07:32:55 force-systemd-flag-136540 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 29 07:32:55 force-systemd-flag-136540 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 29 07:32:55 force-systemd-flag-136540 kubelet[5582]: E1229 07:32:55.985819 5582 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 29 07:32:55 force-systemd-flag-136540 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 29 07:32:55 force-systemd-flag-136540 systemd[1]: kubelet.service: Failed with result 'exit-code'.
-- /stdout --
helpers_test.go:263: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-136540 -n force-systemd-flag-136540
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-136540 -n force-systemd-flag-136540: exit status 6 (485.116757ms)
-- stdout --
Stopped
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1229 07:32:57.011227 962640 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-136540" does not appear in /home/jenkins/minikube-integration/22353-723215/kubeconfig
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-flag-136540" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-flag-136540" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-arm64 delete -p force-systemd-flag-136540
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-136540: (1.947086817s)
--- FAIL: TestForceSystemdFlag (507.17s)
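The repeated kubelet validation errors above ("kubelet is configured to not run on a host using cgroup v1") point to the host cgroup hierarchy rather than the --force-systemd flag itself: the kubelet started by minikube in this run refuses cgroup v1 hosts, and the docker daemon log likewise warns that cgroup v1 support is deprecated. A quick hypothetical check of which hierarchy the host (or the node container) is on, not part of the test run:

  stat -fc %T /sys/fs/cgroup

This prints cgroup2fs on a cgroup v2 (unified) host and tmpfs on a legacy cgroup v1 host.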