=== RUN TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT TestForceSystemdFlag
docker_test.go:91: (dbg) Run: out/minikube-linux-arm64 start -p force-systemd-flag-176470 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=docker
E0111 08:03:14.555279 278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/functional-480092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:04:20.375226 278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/addons-664377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:05:02.800479 278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/skaffold-299813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:05:02.805934 278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/skaffold-299813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:05:02.816268 278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/skaffold-299813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:05:02.836568 278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/skaffold-299813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:05:02.877111 278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/skaffold-299813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:05:02.957460 278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/skaffold-299813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:05:03.117900 278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/skaffold-299813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:05:03.438632 278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/skaffold-299813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:05:04.079653 278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/skaffold-299813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:05:05.360123 278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/skaffold-299813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:05:07.920343 278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/skaffold-299813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:05:13.040880 278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/skaffold-299813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:05:23.282033 278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/skaffold-299813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:05:43.762995 278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/skaffold-299813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:06:17.323020 278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/addons-664377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:06:24.723881 278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/skaffold-299813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:07:46.644142 278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/skaffold-299813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:08:14.554946 278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/functional-480092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:10:02.801703 278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/skaffold-299813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:10:30.486653 278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/skaffold-299813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-flag-176470 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=docker: exit status 109 (8m22.906915361s)
-- stdout --
* [force-systemd-flag-176470] minikube v1.37.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=22402
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/22402-276769/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-276769/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "force-systemd-flag-176470" primary control-plane node in "force-systemd-flag-176470" cluster
* Pulling base image v0.0.48-1768032998-22402 ...
-- /stdout --
** stderr **
I0111 08:02:55.219760 510536 out.go:360] Setting OutFile to fd 1 ...
I0111 08:02:55.219965 510536 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 08:02:55.219993 510536 out.go:374] Setting ErrFile to fd 2...
I0111 08:02:55.220012 510536 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 08:02:55.220685 510536 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-276769/.minikube/bin
I0111 08:02:55.221284 510536 out.go:368] Setting JSON to false
I0111 08:02:55.222163 510536 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9925,"bootTime":1768108650,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
I0111 08:02:55.222344 510536 start.go:143] virtualization:
I0111 08:02:55.225197 510536 out.go:179] * [force-systemd-flag-176470] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I0111 08:02:55.227752 510536 out.go:179] - MINIKUBE_LOCATION=22402
I0111 08:02:55.227908 510536 notify.go:221] Checking for updates...
I0111 08:02:55.233637 510536 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0111 08:02:55.236621 510536 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22402-276769/kubeconfig
I0111 08:02:55.239599 510536 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-276769/.minikube
I0111 08:02:55.242477 510536 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I0111 08:02:55.245433 510536 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I0111 08:02:55.248887 510536 config.go:182] Loaded profile config "force-systemd-env-081796": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I0111 08:02:55.249012 510536 driver.go:422] Setting default libvirt URI to qemu:///system
I0111 08:02:55.278955 510536 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
I0111 08:02:55.279151 510536 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0111 08:02:55.340253 510536 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 08:02:55.330464883 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I0111 08:02:55.340364 510536 docker.go:319] overlay module found
I0111 08:02:55.343549 510536 out.go:179] * Using the docker driver based on user configuration
I0111 08:02:55.346477 510536 start.go:309] selected driver: docker
I0111 08:02:55.346500 510536 start.go:928] validating driver "docker" against <nil>
I0111 08:02:55.346516 510536 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0111 08:02:55.347367 510536 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0111 08:02:55.398049 510536 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 08:02:55.38897404 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I0111 08:02:55.398208 510536 start_flags.go:333] no existing cluster config was found, will generate one from the flags
I0111 08:02:55.398435 510536 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
I0111 08:02:55.401352 510536 out.go:179] * Using Docker driver with root privileges
I0111 08:02:55.404169 510536 cni.go:84] Creating CNI manager for ""
I0111 08:02:55.404240 510536 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0111 08:02:55.404253 510536 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0111 08:02:55.404339 510536 start.go:353] cluster config:
{Name:force-systemd-flag-176470 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-176470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I0111 08:02:55.407425 510536 out.go:179] * Starting "force-systemd-flag-176470" primary control-plane node in "force-systemd-flag-176470" cluster
I0111 08:02:55.410185 510536 cache.go:134] Beginning downloading kic base image for docker with docker
I0111 08:02:55.413170 510536 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
I0111 08:02:55.415999 510536 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I0111 08:02:55.416053 510536 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-276769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4
I0111 08:02:55.416067 510536 cache.go:65] Caching tarball of preloaded images
I0111 08:02:55.416071 510536 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
I0111 08:02:55.416162 510536 preload.go:251] Found /home/jenkins/minikube-integration/22402-276769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0111 08:02:55.416173 510536 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
I0111 08:02:55.416278 510536 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/config.json ...
I0111 08:02:55.416296 510536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/config.json: {Name:mkca1c7e6f1f75138479137408eba180dfbb6698 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0111 08:02:55.436232 510536 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
I0111 08:02:55.436255 510536 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in daemon, skipping load
I0111 08:02:55.436276 510536 cache.go:243] Successfully downloaded all kic artifacts
I0111 08:02:55.436313 510536 start.go:360] acquireMachinesLock for force-systemd-flag-176470: {Name:mk069654716209309832bc30167c071b9142dd8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0111 08:02:55.436420 510536 start.go:364] duration metric: took 86.972µs to acquireMachinesLock for "force-systemd-flag-176470"
I0111 08:02:55.436450 510536 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-176470 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-176470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I0111 08:02:55.436517 510536 start.go:125] createHost starting for "" (driver="docker")
I0111 08:02:55.440079 510536 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
I0111 08:02:55.440330 510536 start.go:159] libmachine.API.Create for "force-systemd-flag-176470" (driver="docker")
I0111 08:02:55.440371 510536 client.go:173] LocalClient.Create starting
I0111 08:02:55.440473 510536 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem
I0111 08:02:55.440510 510536 main.go:144] libmachine: Decoding PEM data...
I0111 08:02:55.440529 510536 main.go:144] libmachine: Parsing certificate...
I0111 08:02:55.440585 510536 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-276769/.minikube/certs/cert.pem
I0111 08:02:55.440606 510536 main.go:144] libmachine: Decoding PEM data...
I0111 08:02:55.440635 510536 main.go:144] libmachine: Parsing certificate...
I0111 08:02:55.441019 510536 cli_runner.go:164] Run: docker network inspect force-systemd-flag-176470 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0111 08:02:55.456590 510536 cli_runner.go:211] docker network inspect force-systemd-flag-176470 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0111 08:02:55.456686 510536 network_create.go:284] running [docker network inspect force-systemd-flag-176470] to gather additional debugging logs...
I0111 08:02:55.456707 510536 cli_runner.go:164] Run: docker network inspect force-systemd-flag-176470
W0111 08:02:55.472891 510536 cli_runner.go:211] docker network inspect force-systemd-flag-176470 returned with exit code 1
I0111 08:02:55.472925 510536 network_create.go:287] error running [docker network inspect force-systemd-flag-176470]: docker network inspect force-systemd-flag-176470: exit status 1
stdout:
[]
stderr:
Error response from daemon: network force-systemd-flag-176470 not found
I0111 08:02:55.472944 510536 network_create.go:289] output of [docker network inspect force-systemd-flag-176470]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network force-systemd-flag-176470 not found
** /stderr **
I0111 08:02:55.473054 510536 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0111 08:02:55.489682 510536 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4553382a3354 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ce:ef:e3:80:f0:4e} reservation:<nil>}
I0111 08:02:55.490078 510536 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-40d7f82078db IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a2:4c:a4:8c:ba:d2} reservation:<nil>}
I0111 08:02:55.490313 510536 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-462883b60cc2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:6e:e8:a2:f7:f9:41} reservation:<nil>}
I0111 08:02:55.490763 510536 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a16310}
I0111 08:02:55.490793 510536 network_create.go:124] attempt to create docker network force-systemd-flag-176470 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
I0111 08:02:55.490879 510536 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-176470 force-systemd-flag-176470
I0111 08:02:55.555925 510536 network_create.go:108] docker network force-systemd-flag-176470 192.168.76.0/24 created
I0111 08:02:55.555959 510536 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-flag-176470" container
I0111 08:02:55.556048 510536 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0111 08:02:55.573066 510536 cli_runner.go:164] Run: docker volume create force-systemd-flag-176470 --label name.minikube.sigs.k8s.io=force-systemd-flag-176470 --label created_by.minikube.sigs.k8s.io=true
I0111 08:02:55.592089 510536 oci.go:103] Successfully created a docker volume force-systemd-flag-176470
I0111 08:02:55.592203 510536 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-176470-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-176470 --entrypoint /usr/bin/test -v force-systemd-flag-176470:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -d /var/lib
I0111 08:02:56.131269 510536 oci.go:107] Successfully prepared a docker volume force-systemd-flag-176470
I0111 08:02:56.131324 510536 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I0111 08:02:56.131342 510536 kic.go:194] Starting extracting preloaded images to volume ...
I0111 08:02:56.131410 510536 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-276769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-176470:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir
I0111 08:02:59.422056 510536 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-276769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-176470:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir: (3.290602576s)
I0111 08:02:59.422088 510536 kic.go:203] duration metric: took 3.290742215s to extract preloaded images to volume ...
W0111 08:02:59.422241 510536 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0111 08:02:59.422362 510536 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0111 08:02:59.471640 510536 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-176470 --name force-systemd-flag-176470 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-176470 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-176470 --network force-systemd-flag-176470 --ip 192.168.76.2 --volume force-systemd-flag-176470:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615
I0111 08:02:59.799854 510536 cli_runner.go:164] Run: docker container inspect force-systemd-flag-176470 --format={{.State.Running}}
I0111 08:02:59.823185 510536 cli_runner.go:164] Run: docker container inspect force-systemd-flag-176470 --format={{.State.Status}}
I0111 08:02:59.842792 510536 cli_runner.go:164] Run: docker exec force-systemd-flag-176470 stat /var/lib/dpkg/alternatives/iptables
I0111 08:02:59.903035 510536 oci.go:144] the created container "force-systemd-flag-176470" has a running status.
I0111 08:02:59.903064 510536 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22402-276769/.minikube/machines/force-systemd-flag-176470/id_rsa...
I0111 08:03:00.642486 510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/machines/force-systemd-flag-176470/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0111 08:03:00.642605 510536 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22402-276769/.minikube/machines/force-systemd-flag-176470/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0111 08:03:00.666293 510536 cli_runner.go:164] Run: docker container inspect force-systemd-flag-176470 --format={{.State.Status}}
I0111 08:03:00.685140 510536 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0111 08:03:00.685163 510536 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-176470 chown docker:docker /home/docker/.ssh/authorized_keys]
I0111 08:03:00.728771 510536 cli_runner.go:164] Run: docker container inspect force-systemd-flag-176470 --format={{.State.Status}}
I0111 08:03:00.747446 510536 machine.go:94] provisionDockerMachine start ...
I0111 08:03:00.747552 510536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-176470
I0111 08:03:00.765376 510536 main.go:144] libmachine: Using SSH client type: native
I0111 08:03:00.765734 510536 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil> [] 0s} 127.0.0.1 33365 <nil> <nil>}
I0111 08:03:00.765754 510536 main.go:144] libmachine: About to run SSH command:
hostname
I0111 08:03:00.766557 510536 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I0111 08:03:03.914487 510536 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-176470
I0111 08:03:03.914512 510536 ubuntu.go:182] provisioning hostname "force-systemd-flag-176470"
I0111 08:03:03.914586 510536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-176470
I0111 08:03:03.932237 510536 main.go:144] libmachine: Using SSH client type: native
I0111 08:03:03.932556 510536 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil> [] 0s} 127.0.0.1 33365 <nil> <nil>}
I0111 08:03:03.932573 510536 main.go:144] libmachine: About to run SSH command:
sudo hostname force-systemd-flag-176470 && echo "force-systemd-flag-176470" | sudo tee /etc/hostname
I0111 08:03:04.105837 510536 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-176470
I0111 08:03:04.105961 510536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-176470
I0111 08:03:04.127215 510536 main.go:144] libmachine: Using SSH client type: native
I0111 08:03:04.127623 510536 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil> [] 0s} 127.0.0.1 33365 <nil> <nil>}
I0111 08:03:04.127644 510536 main.go:144] libmachine: About to run SSH command:
if ! grep -xq '.*\sforce-systemd-flag-176470' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-176470/g' /etc/hosts;
else
echo '127.0.1.1 force-systemd-flag-176470' | sudo tee -a /etc/hosts;
fi
fi
I0111 08:03:04.279132 510536 main.go:144] libmachine: SSH cmd err, output: <nil>:
I0111 08:03:04.279202 510536 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22402-276769/.minikube CaCertPath:/home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22402-276769/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22402-276769/.minikube}
I0111 08:03:04.279237 510536 ubuntu.go:190] setting up certificates
I0111 08:03:04.279260 510536 provision.go:84] configureAuth start
I0111 08:03:04.279342 510536 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-176470
I0111 08:03:04.297243 510536 provision.go:143] copyHostCerts
I0111 08:03:04.297285 510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22402-276769/.minikube/ca.pem
I0111 08:03:04.297322 510536 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-276769/.minikube/ca.pem, removing ...
I0111 08:03:04.297328 510536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-276769/.minikube/ca.pem
I0111 08:03:04.297407 510536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22402-276769/.minikube/ca.pem (1082 bytes)
I0111 08:03:04.297482 510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22402-276769/.minikube/cert.pem
I0111 08:03:04.297498 510536 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-276769/.minikube/cert.pem, removing ...
I0111 08:03:04.297502 510536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-276769/.minikube/cert.pem
I0111 08:03:04.297526 510536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22402-276769/.minikube/cert.pem (1123 bytes)
I0111 08:03:04.297563 510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22402-276769/.minikube/key.pem
I0111 08:03:04.297578 510536 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-276769/.minikube/key.pem, removing ...
I0111 08:03:04.297583 510536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-276769/.minikube/key.pem
I0111 08:03:04.297605 510536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22402-276769/.minikube/key.pem (1675 bytes)
I0111 08:03:04.297646 510536 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22402-276769/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-176470 san=[127.0.0.1 192.168.76.2 force-systemd-flag-176470 localhost minikube]
I0111 08:03:04.676341 510536 provision.go:177] copyRemoteCerts
I0111 08:03:04.676407 510536 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0111 08:03:04.676452 510536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-176470
I0111 08:03:04.695533 510536 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33365 SSHKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/force-systemd-flag-176470/id_rsa Username:docker}
I0111 08:03:04.802703 510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0111 08:03:04.802763 510536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0111 08:03:04.821902 510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/machines/server.pem -> /etc/docker/server.pem
I0111 08:03:04.821976 510536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
I0111 08:03:04.840427 510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0111 08:03:04.840528 510536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0111 08:03:04.858272 510536 provision.go:87] duration metric: took 578.972579ms to configureAuth
I0111 08:03:04.858355 510536 ubuntu.go:206] setting minikube options for container-runtime
I0111 08:03:04.858554 510536 config.go:182] Loaded profile config "force-systemd-flag-176470": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I0111 08:03:04.858617 510536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-176470
I0111 08:03:04.880754 510536 main.go:144] libmachine: Using SSH client type: native
I0111 08:03:04.881061 510536 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil> [] 0s} 127.0.0.1 33365 <nil> <nil>}
I0111 08:03:04.881071 510536 main.go:144] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0111 08:03:05.036241 510536 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
I0111 08:03:05.036263 510536 ubuntu.go:71] root file system type: overlay
I0111 08:03:05.036379 510536 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0111 08:03:05.036456 510536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-176470
I0111 08:03:05.055990 510536 main.go:144] libmachine: Using SSH client type: native
I0111 08:03:05.056308 510536 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil> [] 0s} 127.0.0.1 33365 <nil> <nil>}
I0111 08:03:05.056396 510536 main.go:144] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
Wants=network-online.target containerd.service
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=always
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
-H fd:// --containerd=/run/containerd/containerd.sock \
-H unix:///var/run/docker.sock \
--default-ulimit=nofile=1048576:1048576 \
--tlsverify \
--tlscacert /etc/docker/ca.pem \
--tlscert /etc/docker/server.pem \
--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0111 08:03:05.217159 510536 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
Wants=network-online.target containerd.service
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=always
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H fd:// --containerd=/run/containerd/containerd.sock -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
I0111 08:03:05.217244 510536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-176470
I0111 08:03:05.236377 510536 main.go:144] libmachine: Using SSH client type: native
I0111 08:03:05.236706 510536 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil> [] 0s} 127.0.0.1 33365 <nil> <nil>}
I0111 08:03:05.236730 510536 main.go:144] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0111 08:03:06.213777 510536 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2026-01-08 19:56:21.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2026-01-11 08:03:05.213214607 +0000
@@ -9,23 +9,34 @@
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
Restart=always
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H fd:// --containerd=/run/containerd/containerd.sock -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
+
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
+LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0111 08:03:06.213812 510536 machine.go:97] duration metric: took 5.46634075s to provisionDockerMachine
I0111 08:03:06.213825 510536 client.go:176] duration metric: took 10.773442328s to LocalClient.Create
I0111 08:03:06.213873 510536 start.go:167] duration metric: took 10.773542862s to libmachine.API.Create "force-systemd-flag-176470"
I0111 08:03:06.213889 510536 start.go:293] postStartSetup for "force-systemd-flag-176470" (driver="docker")
I0111 08:03:06.213900 510536 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0111 08:03:06.213976 510536 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0111 08:03:06.214038 510536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-176470
I0111 08:03:06.233489 510536 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33365 SSHKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/force-systemd-flag-176470/id_rsa Username:docker}
I0111 08:03:06.338958 510536 ssh_runner.go:195] Run: cat /etc/os-release
I0111 08:03:06.342424 510536 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0111 08:03:06.342452 510536 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I0111 08:03:06.342463 510536 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-276769/.minikube/addons for local assets ...
I0111 08:03:06.342538 510536 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-276769/.minikube/files for local assets ...
I0111 08:03:06.342671 510536 filesync.go:149] local asset: /home/jenkins/minikube-integration/22402-276769/.minikube/files/etc/ssl/certs/2786382.pem -> 2786382.pem in /etc/ssl/certs
I0111 08:03:06.342685 510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/files/etc/ssl/certs/2786382.pem -> /etc/ssl/certs/2786382.pem
I0111 08:03:06.342793 510536 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0111 08:03:06.351211 510536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/files/etc/ssl/certs/2786382.pem --> /etc/ssl/certs/2786382.pem (1708 bytes)
I0111 08:03:06.369285 510536 start.go:296] duration metric: took 155.381043ms for postStartSetup
I0111 08:03:06.369638 510536 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-176470
I0111 08:03:06.399155 510536 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/config.json ...
I0111 08:03:06.399451 510536 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0111 08:03:06.399491 510536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-176470
I0111 08:03:06.417476 510536 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33365 SSHKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/force-systemd-flag-176470/id_rsa Username:docker}
I0111 08:03:06.520083 510536 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0111 08:03:06.524815 510536 start.go:128] duration metric: took 11.088280156s to createHost
I0111 08:03:06.524841 510536 start.go:83] releasing machines lock for "force-systemd-flag-176470", held for 11.088407356s
I0111 08:03:06.524937 510536 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-176470
I0111 08:03:06.541461 510536 ssh_runner.go:195] Run: cat /version.json
I0111 08:03:06.541495 510536 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0111 08:03:06.541521 510536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-176470
I0111 08:03:06.541568 510536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-176470
I0111 08:03:06.561814 510536 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33365 SSHKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/force-systemd-flag-176470/id_rsa Username:docker}
I0111 08:03:06.578227 510536 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33365 SSHKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/force-systemd-flag-176470/id_rsa Username:docker}
I0111 08:03:06.765197 510536 ssh_runner.go:195] Run: systemctl --version
I0111 08:03:06.771777 510536 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0111 08:03:06.776029 510536 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0111 08:03:06.776122 510536 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0111 08:03:06.804486 510536 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
I0111 08:03:06.804565 510536 start.go:496] detecting cgroup driver to use...
I0111 08:03:06.804592 510536 start.go:500] using "systemd" cgroup driver as enforced via flags
I0111 08:03:06.804767 510536 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0111 08:03:06.818674 510536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I0111 08:03:06.828002 510536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0111 08:03:06.837067 510536 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
I0111 08:03:06.837138 510536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
I0111 08:03:06.845964 510536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0111 08:03:06.855049 510536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0111 08:03:06.863676 510536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0111 08:03:06.872497 510536 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0111 08:03:06.880973 510536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0111 08:03:06.890121 510536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0111 08:03:06.899090 510536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
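The decisive edit in this block is the SystemdCgroup flip; a minimal sketch for reproducing and checking it by hand (same sed as logged above, with a grep added here for verification):

  # force runc to use the systemd cgroup driver in containerd's config
  sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml
  # confirm the flag took effect
  grep -n 'SystemdCgroup' /etc/containerd/config.toml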
I0111 08:03:06.908147 510536 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0111 08:03:06.915960 510536 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0111 08:03:06.923607 510536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0111 08:03:07.033909 510536 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0111 08:03:07.138594 510536 start.go:496] detecting cgroup driver to use...
I0111 08:03:07.138622 510536 start.go:500] using "systemd" cgroup driver as enforced via flags
I0111 08:03:07.138676 510536 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0111 08:03:07.154245 510536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0111 08:03:07.172345 510536 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0111 08:03:07.221655 510536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0111 08:03:07.234818 510536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0111 08:03:07.247793 510536 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0111 08:03:07.261501 510536 ssh_runner.go:195] Run: which cri-dockerd
I0111 08:03:07.264985 510536 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0111 08:03:07.272438 510536 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
I0111 08:03:07.284695 510536 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0111 08:03:07.404970 510536 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0111 08:03:07.524732 510536 docker.go:578] configuring docker to use "systemd" as cgroup driver...
I0111 08:03:07.524836 510536 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
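The 129 bytes pushed into /etc/docker/daemon.json are not echoed in the log; a plausible minimal sketch of a daemon.json that enforces the systemd cgroup driver (the exact contents minikube writes may differ) is:

  {
    "exec-opts": ["native.cgroupdriver=systemd"],
    "log-driver": "json-file"
  }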
I0111 08:03:07.537550 510536 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
I0111 08:03:07.550391 510536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0111 08:03:07.666047 510536 ssh_runner.go:195] Run: sudo systemctl restart docker
I0111 08:03:08.113136 510536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0111 08:03:08.126395 510536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0111 08:03:08.140492 510536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0111 08:03:08.154283 510536 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0111 08:03:08.276152 510536 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0111 08:03:08.399843 510536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0111 08:03:08.519920 510536 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0111 08:03:08.535880 510536 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
I0111 08:03:08.548954 510536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0111 08:03:08.674253 510536 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0111 08:03:08.750422 510536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0111 08:03:08.764674 510536 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0111 08:03:08.764745 510536 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0111 08:03:08.770190 510536 start.go:574] Will wait 60s for crictl version
I0111 08:03:08.770257 510536 ssh_runner.go:195] Run: which crictl
I0111 08:03:08.773920 510536 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I0111 08:03:08.803610 510536 start.go:590] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 29.1.4
RuntimeApiVersion: v1
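The version facts above can be re-queried on the node at any time; both subcommands are stock crictl:

  sudo crictl version   # RuntimeName / RuntimeVersion / RuntimeApiVersion, as logged above
  sudo crictl info      # full runtime status as JSON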
I0111 08:03:08.803693 510536 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0111 08:03:08.828423 510536 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0111 08:03:08.856514 510536 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 29.1.4 ...
I0111 08:03:08.856630 510536 cli_runner.go:164] Run: docker network inspect force-systemd-flag-176470 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0111 08:03:08.871466 510536 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I0111 08:03:08.876325 510536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0111 08:03:08.886543 510536 kubeadm.go:884] updating cluster {Name:force-systemd-flag-176470 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-176470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
I0111 08:03:08.886659 510536 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I0111 08:03:08.886724 510536 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0111 08:03:08.904762 510536 docker.go:694] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
registry.k8s.io/pause:3.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0111 08:03:08.904783 510536 docker.go:624] Images already preloaded, skipping extraction
I0111 08:03:08.904854 510536 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0111 08:03:08.922241 510536 docker.go:694] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
registry.k8s.io/pause:3.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0111 08:03:08.922264 510536 cache_images.go:86] Images are preloaded, skipping loading
I0111 08:03:08.922278 510536 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 docker true true} ...
I0111 08:03:08.922378 510536 kubeadm.go:947] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-176470 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
[Install]
config:
{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-176470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0111 08:03:08.922440 510536 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0111 08:03:08.974284 510536 cni.go:84] Creating CNI manager for ""
I0111 08:03:08.974315 510536 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0111 08:03:08.974351 510536 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I0111 08:03:08.974374 510536 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-176470 NodeName:force-systemd-flag-176470 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0111 08:03:08.974535 510536 kubeadm.go:203] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.76.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "force-systemd-flag-176470"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.76.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.35.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
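A generated config like the one above can be sanity-checked before it reaches kubeadm init; recent kubeadm releases ship a validator (shown here as a sketch against the path the test uses below):

  sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml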
I0111 08:03:08.974611 510536 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
I0111 08:03:08.983370 510536 binaries.go:51] Found k8s binaries, skipping transfer
I0111 08:03:08.983451 510536 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0111 08:03:08.991625 510536 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
I0111 08:03:09.006053 510536 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0111 08:03:09.021805 510536 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
I0111 08:03:09.035826 510536 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I0111 08:03:09.039822 510536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
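The one-liner above is a reusable pattern for idempotently pinning a hosts entry: drop any stale line for the name, append the fresh mapping, then copy the result back with sudo. Spelled out with a hypothetical name and IP:

  { grep -v $'\texample.internal$' /etc/hosts; echo "10.0.0.5 example.internal"; } > /tmp/h.$$
  sudo cp /tmp/h.$$ /etc/hosts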
I0111 08:03:09.049986 510536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0111 08:03:09.169621 510536 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0111 08:03:09.185723 510536 certs.go:69] Setting up /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470 for IP: 192.168.76.2
I0111 08:03:09.185747 510536 certs.go:195] generating shared ca certs ...
I0111 08:03:09.185764 510536 certs.go:227] acquiring lock for ca certs: {Name:mk5238b420a0ee024668d9aed797ac9a441cf30a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0111 08:03:09.185898 510536 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22402-276769/.minikube/ca.key
I0111 08:03:09.185958 510536 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22402-276769/.minikube/proxy-client-ca.key
I0111 08:03:09.185971 510536 certs.go:257] generating profile certs ...
I0111 08:03:09.186038 510536 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/client.key
I0111 08:03:09.186055 510536 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/client.crt with IP's: []
I0111 08:03:09.419531 510536 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/client.crt ...
I0111 08:03:09.419571 510536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/client.crt: {Name:mk9418e58d3186bffe31b727378fd0d08defb8d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0111 08:03:09.419773 510536 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/client.key ...
I0111 08:03:09.419788 510536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/client.key: {Name:mk349358a2ff97e24a0ee5565acc755705e64bbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0111 08:03:09.419881 510536 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/apiserver.key.732f4861
I0111 08:03:09.419901 510536 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/apiserver.crt.732f4861 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
I0111 08:03:09.847845 510536 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/apiserver.crt.732f4861 ...
I0111 08:03:09.847876 510536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/apiserver.crt.732f4861: {Name:mk412a98969fba1e6fc51a9a93b9bc1d873d6a42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0111 08:03:09.848059 510536 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/apiserver.key.732f4861 ...
I0111 08:03:09.848075 510536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/apiserver.key.732f4861: {Name:mkc93f76b0581a0b9e089b7481afceecd0c3c04d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0111 08:03:09.848163 510536 certs.go:382] copying /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/apiserver.crt.732f4861 -> /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/apiserver.crt
I0111 08:03:09.848240 510536 certs.go:386] copying /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/apiserver.key.732f4861 -> /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/apiserver.key
I0111 08:03:09.848303 510536 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/proxy-client.key
I0111 08:03:09.848323 510536 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/proxy-client.crt with IP's: []
I0111 08:03:10.141613 510536 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/proxy-client.crt ...
I0111 08:03:10.141647 510536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/proxy-client.crt: {Name:mk6dace60bb0b0492d37d0756683e679aa0ab1d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0111 08:03:10.141875 510536 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/proxy-client.key ...
I0111 08:03:10.141891 510536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/proxy-client.key: {Name:mk0b0880a2b49969d86a957c1c38bf80a6fa094b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
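minikube generates these profile certs internally; an equivalent hedged openssl sketch for minting a CA-signed client cert (file names and subject are illustrative, not what crypto.go literally runs):

  openssl genrsa -out client.key 2048
  openssl req -new -key client.key -subj "/CN=minikube-user/O=system:masters" -out client.csr
  openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out client.crt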
I0111 08:03:10.141982 510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0111 08:03:10.142003 510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0111 08:03:10.142022 510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0111 08:03:10.142034 510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0111 08:03:10.142051 510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0111 08:03:10.142068 510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0111 08:03:10.142084 510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0111 08:03:10.142099 510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0111 08:03:10.142154 510536 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/278638.pem (1338 bytes)
W0111 08:03:10.142196 510536 certs.go:480] ignoring /home/jenkins/minikube-integration/22402-276769/.minikube/certs/278638_empty.pem, impossibly tiny 0 bytes
I0111 08:03:10.142209 510536 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca-key.pem (1675 bytes)
I0111 08:03:10.142241 510536 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem (1082 bytes)
I0111 08:03:10.142272 510536 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/cert.pem (1123 bytes)
I0111 08:03:10.142300 510536 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/key.pem (1675 bytes)
I0111 08:03:10.142362 510536 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-276769/.minikube/files/etc/ssl/certs/2786382.pem (1708 bytes)
I0111 08:03:10.142398 510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0111 08:03:10.142416 510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/278638.pem -> /usr/share/ca-certificates/278638.pem
I0111 08:03:10.142435 510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/files/etc/ssl/certs/2786382.pem -> /usr/share/ca-certificates/2786382.pem
I0111 08:03:10.143004 510536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0111 08:03:10.162328 510536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0111 08:03:10.184581 510536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0111 08:03:10.205364 510536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0111 08:03:10.225605 510536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
I0111 08:03:10.244217 510536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0111 08:03:10.262318 510536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0111 08:03:10.280609 510536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0111 08:03:10.298945 510536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0111 08:03:10.317711 510536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/certs/278638.pem --> /usr/share/ca-certificates/278638.pem (1338 bytes)
I0111 08:03:10.337232 510536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/files/etc/ssl/certs/2786382.pem --> /usr/share/ca-certificates/2786382.pem (1708 bytes)
I0111 08:03:10.355950 510536 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
I0111 08:03:10.369827 510536 ssh_runner.go:195] Run: openssl version
I0111 08:03:10.376367 510536 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I0111 08:03:10.384503 510536 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I0111 08:03:10.392677 510536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0111 08:03:10.396870 510536 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 11 07:24 /usr/share/ca-certificates/minikubeCA.pem
I0111 08:03:10.396985 510536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0111 08:03:10.438118 510536 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I0111 08:03:10.445811 510536 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
I0111 08:03:10.453210 510536 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/278638.pem
I0111 08:03:10.460886 510536 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/278638.pem /etc/ssl/certs/278638.pem
I0111 08:03:10.468116 510536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/278638.pem
I0111 08:03:10.472747 510536 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 11 07:30 /usr/share/ca-certificates/278638.pem
I0111 08:03:10.472823 510536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/278638.pem
I0111 08:03:10.514049 510536 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I0111 08:03:10.521615 510536 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/278638.pem /etc/ssl/certs/51391683.0
I0111 08:03:10.529704 510536 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2786382.pem
I0111 08:03:10.537387 510536 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2786382.pem /etc/ssl/certs/2786382.pem
I0111 08:03:10.545355 510536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2786382.pem
I0111 08:03:10.549343 510536 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 11 07:30 /usr/share/ca-certificates/2786382.pem
I0111 08:03:10.549411 510536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2786382.pem
I0111 08:03:10.590601 510536 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I0111 08:03:10.598218 510536 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2786382.pem /etc/ssl/certs/3ec20f2e.0
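Each installed PEM is paired with an OpenSSL subject-hash symlink (b5213941.0, 51391683.0, 3ec20f2e.0 above) so the system trust store can resolve it; the pattern in two lines:

  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints e.g. b5213941
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"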
I0111 08:03:10.605617 510536 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0111 08:03:10.609166 510536 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0111 08:03:10.609220 510536 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-176470 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-176470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I0111 08:03:10.609341 510536 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0111 08:03:10.628830 510536 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0111 08:03:10.640206 510536 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0111 08:03:10.649415 510536 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I0111 08:03:10.649480 510536 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0111 08:03:10.660656 510536 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0111 08:03:10.660677 510536 kubeadm.go:158] found existing configuration files:
I0111 08:03:10.660739 510536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0111 08:03:10.670232 510536 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0111 08:03:10.670316 510536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0111 08:03:10.678581 510536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0111 08:03:10.688924 510536 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0111 08:03:10.688993 510536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0111 08:03:10.696341 510536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0111 08:03:10.704448 510536 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0111 08:03:10.704518 510536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0111 08:03:10.712096 510536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0111 08:03:10.719777 510536 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0111 08:03:10.719863 510536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0111 08:03:10.727911 510536 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0111 08:03:10.845766 510536 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I0111 08:03:10.846305 510536 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I0111 08:03:10.931360 510536 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0111 08:07:15.098226 510536 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I0111 08:07:15.098261 510536 kubeadm.go:319]
I0111 08:07:15.098395 510536 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
I0111 08:07:15.103138 510536 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
I0111 08:07:15.103229 510536 kubeadm.go:319] [preflight] Running pre-flight checks
I0111 08:07:15.103392 510536 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I0111 08:07:15.103495 510536 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I0111 08:07:15.103566 510536 kubeadm.go:319] OS: Linux
I0111 08:07:15.103647 510536 kubeadm.go:319] CGROUPS_CPU: enabled
I0111 08:07:15.103732 510536 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I0111 08:07:15.103815 510536 kubeadm.go:319] CGROUPS_CPUSET: enabled
I0111 08:07:15.103897 510536 kubeadm.go:319] CGROUPS_DEVICES: enabled
I0111 08:07:15.103980 510536 kubeadm.go:319] CGROUPS_FREEZER: enabled
I0111 08:07:15.104062 510536 kubeadm.go:319] CGROUPS_MEMORY: enabled
I0111 08:07:15.104143 510536 kubeadm.go:319] CGROUPS_PIDS: enabled
I0111 08:07:15.104224 510536 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I0111 08:07:15.104304 510536 kubeadm.go:319] CGROUPS_BLKIO: enabled
I0111 08:07:15.104430 510536 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I0111 08:07:15.104597 510536 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0111 08:07:15.104755 510536 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0111 08:07:15.104862 510536 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0111 08:07:15.108159 510536 out.go:252] - Generating certificates and keys ...
I0111 08:07:15.108298 510536 kubeadm.go:319] [certs] Using existing ca certificate authority
I0111 08:07:15.108388 510536 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I0111 08:07:15.108475 510536 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I0111 08:07:15.108582 510536 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I0111 08:07:15.108652 510536 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I0111 08:07:15.108742 510536 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I0111 08:07:15.108832 510536 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I0111 08:07:15.108986 510536 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-176470 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
I0111 08:07:15.109071 510536 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I0111 08:07:15.109237 510536 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-176470 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
I0111 08:07:15.109320 510536 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I0111 08:07:15.109403 510536 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I0111 08:07:15.109483 510536 kubeadm.go:319] [certs] Generating "sa" key and public key
I0111 08:07:15.109555 510536 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0111 08:07:15.109634 510536 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I0111 08:07:15.109706 510536 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0111 08:07:15.109788 510536 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0111 08:07:15.109867 510536 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0111 08:07:15.109933 510536 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0111 08:07:15.110023 510536 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0111 08:07:15.110091 510536 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0111 08:07:15.113314 510536 out.go:252] - Booting up control plane ...
I0111 08:07:15.113429 510536 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0111 08:07:15.113518 510536 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0111 08:07:15.113592 510536 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0111 08:07:15.113703 510536 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0111 08:07:15.113801 510536 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I0111 08:07:15.113911 510536 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I0111 08:07:15.114000 510536 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0111 08:07:15.114043 510536 kubeadm.go:319] [kubelet-start] Starting the kubelet
I0111 08:07:15.114178 510536 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0111 08:07:15.114288 510536 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0111 08:07:15.114363 510536 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000233896s
I0111 08:07:15.114372 510536 kubeadm.go:319]
I0111 08:07:15.114430 510536 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I0111 08:07:15.114467 510536 kubeadm.go:319] - The kubelet is not running
I0111 08:07:15.114576 510536 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0111 08:07:15.114584 510536 kubeadm.go:319]
I0111 08:07:15.114691 510536 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0111 08:07:15.114727 510536 kubeadm.go:319] - 'systemctl status kubelet'
I0111 08:07:15.114763 510536 kubeadm.go:319] - 'journalctl -xeu kubelet'
W0111 08:07:15.114900 510536 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-176470 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-176470 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000233896s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
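When kubeadm aborts at wait-control-plane like this, the suggested commands plus a cgroup-driver comparison are the usual triage; a hedged sequence to run inside the node:

  systemctl status kubelet --no-pager
  journalctl -xeu kubelet --no-pager | tail -n 50
  # both of these should report "systemd" for a --force-systemd run
  docker info --format '{{.CgroupDriver}}'
  grep cgroupDriver /var/lib/kubelet/config.yaml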
I0111 08:07:15.114996 510536 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
I0111 08:07:15.116109 510536 kubeadm.go:319]
I0111 08:07:15.534570 510536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0111 08:07:15.548124 510536 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I0111 08:07:15.548189 510536 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0111 08:07:15.556213 510536 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0111 08:07:15.556274 510536 kubeadm.go:158] found existing configuration files:
I0111 08:07:15.556335 510536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0111 08:07:15.563912 510536 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0111 08:07:15.563978 510536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0111 08:07:15.571080 510536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0111 08:07:15.578655 510536 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0111 08:07:15.578729 510536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0111 08:07:15.586262 510536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0111 08:07:15.593982 510536 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0111 08:07:15.594058 510536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0111 08:07:15.601473 510536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0111 08:07:15.609148 510536 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0111 08:07:15.609220 510536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0111 08:07:15.616665 510536 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0111 08:07:15.736539 510536 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I0111 08:07:15.737021 510536 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I0111 08:07:15.804671 510536 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0111 08:11:17.452858 510536 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0111 08:11:17.452923 510536 kubeadm.go:319]
I0111 08:11:17.453044 510536 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
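(The cgroups v1 warning above names the kubelet setting for keeping cgroups v1 accepted. A minimal sketch of that opt-in, assuming the v1beta1 KubeletConfiguration field is spelled failCgroupV1 and that an extra YAML document can simply be appended to the kubeadm config minikube generates; neither assumption is verified against this run.)

    cat <<'EOF' | sudo tee -a /var/tmp/minikube/kubeadm.yaml
    ---
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false
    EOF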
I0111 08:11:17.455493 510536 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
I0111 08:11:17.455552 510536 kubeadm.go:319] [preflight] Running pre-flight checks
I0111 08:11:17.455655 510536 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I0111 08:11:17.455726 510536 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I0111 08:11:17.455771 510536 kubeadm.go:319] OS: Linux
I0111 08:11:17.455821 510536 kubeadm.go:319] CGROUPS_CPU: enabled
I0111 08:11:17.455882 510536 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I0111 08:11:17.455934 510536 kubeadm.go:319] CGROUPS_CPUSET: enabled
I0111 08:11:17.455990 510536 kubeadm.go:319] CGROUPS_DEVICES: enabled
I0111 08:11:17.456045 510536 kubeadm.go:319] CGROUPS_FREEZER: enabled
I0111 08:11:17.456098 510536 kubeadm.go:319] CGROUPS_MEMORY: enabled
I0111 08:11:17.456174 510536 kubeadm.go:319] CGROUPS_PIDS: enabled
I0111 08:11:17.456250 510536 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I0111 08:11:17.456309 510536 kubeadm.go:319] CGROUPS_BLKIO: enabled
I0111 08:11:17.456404 510536 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I0111 08:11:17.456555 510536 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0111 08:11:17.456685 510536 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0111 08:11:17.456751 510536 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0111 08:11:17.461728 510536 out.go:252] - Generating certificates and keys ...
I0111 08:11:17.461862 510536 kubeadm.go:319] [certs] Using existing ca certificate authority
I0111 08:11:17.461936 510536 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I0111 08:11:17.462020 510536 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0111 08:11:17.462086 510536 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
I0111 08:11:17.462160 510536 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
I0111 08:11:17.462218 510536 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
I0111 08:11:17.462283 510536 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
I0111 08:11:17.462345 510536 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
I0111 08:11:17.462426 510536 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0111 08:11:17.462501 510536 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0111 08:11:17.462539 510536 kubeadm.go:319] [certs] Using the existing "sa" key
I0111 08:11:17.462595 510536 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0111 08:11:17.462647 510536 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I0111 08:11:17.462704 510536 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0111 08:11:17.462757 510536 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0111 08:11:17.462821 510536 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0111 08:11:17.462945 510536 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0111 08:11:17.463059 510536 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0111 08:11:17.463156 510536 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0111 08:11:17.466057 510536 out.go:252] - Booting up control plane ...
I0111 08:11:17.466204 510536 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0111 08:11:17.466297 510536 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0111 08:11:17.466399 510536 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0111 08:11:17.466523 510536 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0111 08:11:17.466625 510536 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I0111 08:11:17.466736 510536 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I0111 08:11:17.466873 510536 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0111 08:11:17.466916 510536 kubeadm.go:319] [kubelet-start] Starting the kubelet
I0111 08:11:17.467064 510536 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0111 08:11:17.467174 510536 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0111 08:11:17.467272 510536 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000843963s
I0111 08:11:17.467285 510536 kubeadm.go:319]
I0111 08:11:17.467355 510536 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I0111 08:11:17.467421 510536 kubeadm.go:319] - The kubelet is not running
I0111 08:11:17.467571 510536 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0111 08:11:17.467585 510536 kubeadm.go:319]
I0111 08:11:17.467699 510536 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0111 08:11:17.467740 510536 kubeadm.go:319] - 'systemctl status kubelet'
I0111 08:11:17.467774 510536 kubeadm.go:319] - 'journalctl -xeu kubelet'
I0111 08:11:17.467797 510536 kubeadm.go:319]
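(The two troubleshooting commands suggested above can be run inside the node over minikube's ssh tunnel; a minimal sketch using the profile name from this run, plus a manual probe of the same healthz endpoint the kubelet check polls:)

    out/minikube-linux-arm64 -p force-systemd-flag-176470 ssh -- sudo systemctl status kubelet --no-pager
    out/minikube-linux-arm64 -p force-systemd-flag-176470 ssh -- sudo journalctl -xeu kubelet --no-pager
    out/minikube-linux-arm64 -p force-systemd-flag-176470 ssh -- curl -sS http://127.0.0.1:10248/healthz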
I0111 08:11:17.467843 510536 kubeadm.go:403] duration metric: took 8m6.858627939s to StartCluster
I0111 08:11:17.467883 510536 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
I0111 08:11:17.467954 510536 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
I0111 08:11:17.510393 510536 cri.go:96] found id: ""
I0111 08:11:17.510436 510536 logs.go:282] 0 containers: []
W0111 08:11:17.510445 510536 logs.go:284] No container was found matching "kube-apiserver"
I0111 08:11:17.510454 510536 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
I0111 08:11:17.510520 510536 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
I0111 08:11:17.540055 510536 cri.go:96] found id: ""
I0111 08:11:17.540090 510536 logs.go:282] 0 containers: []
W0111 08:11:17.540099 510536 logs.go:284] No container was found matching "etcd"
I0111 08:11:17.540106 510536 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
I0111 08:11:17.540168 510536 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
I0111 08:11:17.568990 510536 cri.go:96] found id: ""
I0111 08:11:17.569063 510536 logs.go:282] 0 containers: []
W0111 08:11:17.569086 510536 logs.go:284] No container was found matching "coredns"
I0111 08:11:17.569106 510536 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
I0111 08:11:17.569199 510536 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
I0111 08:11:17.598546 510536 cri.go:96] found id: ""
I0111 08:11:17.598624 510536 logs.go:282] 0 containers: []
W0111 08:11:17.598647 510536 logs.go:284] No container was found matching "kube-scheduler"
I0111 08:11:17.598667 510536 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
I0111 08:11:17.598751 510536 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
I0111 08:11:17.650676 510536 cri.go:96] found id: ""
I0111 08:11:17.650750 510536 logs.go:282] 0 containers: []
W0111 08:11:17.650773 510536 logs.go:284] No container was found matching "kube-proxy"
I0111 08:11:17.650794 510536 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
I0111 08:11:17.650928 510536 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
I0111 08:11:17.684396 510536 cri.go:96] found id: ""
I0111 08:11:17.684474 510536 logs.go:282] 0 containers: []
W0111 08:11:17.684505 510536 logs.go:284] No container was found matching "kube-controller-manager"
I0111 08:11:17.684527 510536 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
I0111 08:11:17.684636 510536 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
I0111 08:11:17.745829 510536 cri.go:96] found id: ""
I0111 08:11:17.745873 510536 logs.go:282] 0 containers: []
W0111 08:11:17.745883 510536 logs.go:284] No container was found matching "kindnet"
I0111 08:11:17.745892 510536 logs.go:123] Gathering logs for kubelet ...
I0111 08:11:17.745930 510536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0111 08:11:17.828347 510536 logs.go:123] Gathering logs for dmesg ...
I0111 08:11:17.828383 510536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0111 08:11:17.850604 510536 logs.go:123] Gathering logs for describe nodes ...
I0111 08:11:17.850630 510536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0111 08:11:17.973516 510536 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E0111 08:11:17.963026 5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0111 08:11:17.963926 5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0111 08:11:17.967222 5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0111 08:11:17.967572 5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0111 08:11:17.969066 5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
E0111 08:11:17.963026 5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0111 08:11:17.963926 5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0111 08:11:17.967222 5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0111 08:11:17.967572 5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0111 08:11:17.969066 5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0111 08:11:17.973541 510536 logs.go:123] Gathering logs for Docker ...
I0111 08:11:17.973554 510536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0111 08:11:18.001046 510536 logs.go:123] Gathering logs for container status ...
I0111 08:11:18.001086 510536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W0111 08:11:18.046288 510536 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000843963s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
W0111 08:11:18.046406 510536 out.go:285] *
W0111 08:11:18.046610 510536 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000843963s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
W0111 08:11:18.046721 510536 out.go:285] *
W0111 08:11:18.047132 510536 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
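(Spelled out with the profile from this run, the report command named in the box above would be:)

    out/minikube-linux-arm64 -p force-systemd-flag-176470 logs --file=logs.txt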
I0111 08:11:18.052816 510536 out.go:203]
W0111 08:11:18.055641 510536 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000843963s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
W0111 08:11:18.055919 510536 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0111 08:11:18.055975 510536 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
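(A sketch of the suggested retry, reusing the start flags from this test plus the extra-config the message recommends; whether kubelet v1.35 still honors the deprecated cgroup-driver flag is not something this log confirms:)

    out/minikube-linux-arm64 delete -p force-systemd-flag-176470
    out/minikube-linux-arm64 start -p force-systemd-flag-176470 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=docker --extra-config=kubelet.cgroup-driver=systemd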
I0111 08:11:18.060639 510536 out.go:203]
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-flag-176470 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=docker" : exit status 109
docker_test.go:110: (dbg) Run: out/minikube-linux-arm64 -p force-systemd-flag-176470 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2026-01-11 08:11:18.761835576 +0000 UTC m=+2857.878215376
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======> post-mortem[TestForceSystemdFlag]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======> post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:240: (dbg) Run: docker inspect force-systemd-flag-176470
helpers_test.go:244: (dbg) docker inspect force-systemd-flag-176470:
-- stdout --
[
{
"Id": "5c184c721f25fca51f626bddcfe59f674410f8078ca9875d5fd4ae7dd11e1f49",
"Created": "2026-01-11T08:02:59.486672984Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 510959,
"ExitCode": 0,
"Error": "",
"StartedAt": "2026-01-11T08:02:59.556542956Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:c30b0ef598bea80c56dc4b61cd46a579326b46036ca8ef885614e2a49a37d006",
"ResolvConfPath": "/var/lib/docker/containers/5c184c721f25fca51f626bddcfe59f674410f8078ca9875d5fd4ae7dd11e1f49/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/5c184c721f25fca51f626bddcfe59f674410f8078ca9875d5fd4ae7dd11e1f49/hostname",
"HostsPath": "/var/lib/docker/containers/5c184c721f25fca51f626bddcfe59f674410f8078ca9875d5fd4ae7dd11e1f49/hosts",
"LogPath": "/var/lib/docker/containers/5c184c721f25fca51f626bddcfe59f674410f8078ca9875d5fd4ae7dd11e1f49/5c184c721f25fca51f626bddcfe59f674410f8078ca9875d5fd4ae7dd11e1f49-json.log",
"Name": "/force-systemd-flag-176470",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"force-systemd-flag-176470:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "force-systemd-flag-176470",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 3221225472,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 6442450944,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "5c184c721f25fca51f626bddcfe59f674410f8078ca9875d5fd4ae7dd11e1f49",
"LowerDir": "/var/lib/docker/overlay2/90aae16436fdea248948d1bc76c2767ca65cc482cbc13ecaac8eb594f4f461a3-init/diff:/var/lib/docker/overlay2/e4b3b3f7b2adc33a7ca49c4e0ccdd05f06b3e555556bac3db149fafb744bb371/diff",
"MergedDir": "/var/lib/docker/overlay2/90aae16436fdea248948d1bc76c2767ca65cc482cbc13ecaac8eb594f4f461a3/merged",
"UpperDir": "/var/lib/docker/overlay2/90aae16436fdea248948d1bc76c2767ca65cc482cbc13ecaac8eb594f4f461a3/diff",
"WorkDir": "/var/lib/docker/overlay2/90aae16436fdea248948d1bc76c2767ca65cc482cbc13ecaac8eb594f4f461a3/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "force-systemd-flag-176470",
"Source": "/var/lib/docker/volumes/force-systemd-flag-176470/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "force-systemd-flag-176470",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "force-systemd-flag-176470",
"name.minikube.sigs.k8s.io": "force-systemd-flag-176470",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "67148d5f8f89e37fb1e1a27a81118c30380e4084641fb450af1f67ff9a1f3fd2",
"SandboxKey": "/var/run/docker/netns/67148d5f8f89",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33365"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33366"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33369"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33367"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33368"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"force-systemd-flag-176470": {
"IPAMConfig": {
"IPv4Address": "192.168.76.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "f6:b9:27:f2:b4:f4",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "3b6e1c162656adf1a9d01bfef379c0d2c9e5a5a5e226c14f3fd4ba142242bf34",
"EndpointID": "abcebaa1cf221454fc039923d19aa6e99abd10973f27618451a7214def396a85",
"Gateway": "192.168.76.1",
"IPAddress": "192.168.76.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"force-systemd-flag-176470",
"5c184c721f25"
]
}
}
}
}
]
-- /stdout --
helpers_test.go:248: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-176470 -n force-systemd-flag-176470
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-176470 -n force-systemd-flag-176470: exit status 6 (501.868109ms)
-- stdout --
Running
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E0111 08:11:19.262903 524304 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-176470" does not appear in /home/jenkins/minikube-integration/22402-276769/kubeconfig
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
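(The stale-context warning in the status output above names its own fix; with this profile it would be:)

    out/minikube-linux-arm64 -p force-systemd-flag-176470 update-context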
helpers_test.go:253: <<< TestForceSystemdFlag FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======> post-mortem[TestForceSystemdFlag]: minikube logs <======
helpers_test.go:256: (dbg) Run: out/minikube-linux-arm64 -p force-systemd-flag-176470 logs -n 25
helpers_test.go:261: TestForceSystemdFlag logs:
-- stdout --
==> Audit <==
┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ ssh │ -p cilium-195160 sudo cat /usr/lib/systemd/system/cri-docker.service │ cilium-195160 │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │ │
│ ssh │ -p cilium-195160 sudo cri-dockerd --version │ cilium-195160 │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │ │
│ ssh │ -p cilium-195160 sudo systemctl status containerd --all --full --no-pager │ cilium-195160 │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │ │
│ ssh │ -p cilium-195160 sudo systemctl cat containerd --no-pager │ cilium-195160 │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │ │
│ ssh │ -p cilium-195160 sudo cat /lib/systemd/system/containerd.service │ cilium-195160 │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │ │
│ ssh │ -p cilium-195160 sudo cat /etc/containerd/config.toml │ cilium-195160 │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │ │
│ ssh │ -p cilium-195160 sudo containerd config dump │ cilium-195160 │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │ │
│ ssh │ -p cilium-195160 sudo systemctl status crio --all --full --no-pager │ cilium-195160 │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │ │
│ ssh │ -p cilium-195160 sudo systemctl cat crio --no-pager │ cilium-195160 │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │ │
│ ssh │ -p cilium-195160 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \; │ cilium-195160 │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │ │
│ ssh │ -p cilium-195160 sudo crio config │ cilium-195160 │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │ │
│ delete │ -p cilium-195160 │ cilium-195160 │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │ 11 Jan 26 08:02 UTC │
│ start │ -p force-systemd-env-081796 --memory=3072 --alsologtostderr -v=5 --driver=docker --container-runtime=docker │ force-systemd-env-081796 │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │ │
│ delete │ -p NoKubernetes-616586 │ NoKubernetes-616586 │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │ 11 Jan 26 08:02 UTC │
│ start │ -p NoKubernetes-616586 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker --container-runtime=docker │ NoKubernetes-616586 │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │ 11 Jan 26 08:02 UTC │
│ ssh │ -p NoKubernetes-616586 sudo systemctl is-active --quiet service kubelet │ NoKubernetes-616586 │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │ │
│ stop │ -p NoKubernetes-616586 │ NoKubernetes-616586 │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │ 11 Jan 26 08:02 UTC │
│ start │ -p NoKubernetes-616586 --driver=docker --container-runtime=docker │ NoKubernetes-616586 │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │ 11 Jan 26 08:02 UTC │
│ ssh │ -p NoKubernetes-616586 sudo systemctl is-active --quiet service kubelet │ NoKubernetes-616586 │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │ │
│ delete │ -p NoKubernetes-616586 │ NoKubernetes-616586 │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │ 11 Jan 26 08:02 UTC │
│ start │ -p force-systemd-flag-176470 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=docker │ force-systemd-flag-176470 │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │ │
│ ssh │ force-systemd-env-081796 ssh docker info --format {{.CgroupDriver}} │ force-systemd-env-081796 │ jenkins │ v1.37.0 │ 11 Jan 26 08:10 UTC │ 11 Jan 26 08:10 UTC │
│ delete │ -p force-systemd-env-081796 │ force-systemd-env-081796 │ jenkins │ v1.37.0 │ 11 Jan 26 08:10 UTC │ 11 Jan 26 08:10 UTC │
│ start │ -p docker-flags-747538 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker --container-runtime=docker │ docker-flags-747538 │ jenkins │ v1.37.0 │ 11 Jan 26 08:10 UTC │ │
│ ssh │ force-systemd-flag-176470 ssh docker info --format {{.CgroupDriver}} │ force-systemd-flag-176470 │ jenkins │ v1.37.0 │ 11 Jan 26 08:11 UTC │ 11 Jan 26 08:11 UTC │
└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2026/01/11 08:10:49
Running on machine: ip-172-31-21-244
Binary: Built with gc go1.25.5 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0111 08:10:49.343746 520924 out.go:360] Setting OutFile to fd 1 ...
I0111 08:10:49.343869 520924 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 08:10:49.343878 520924 out.go:374] Setting ErrFile to fd 2...
I0111 08:10:49.343883 520924 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 08:10:49.344129 520924 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-276769/.minikube/bin
I0111 08:10:49.344599 520924 out.go:368] Setting JSON to false
I0111 08:10:49.345447 520924 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":10399,"bootTime":1768108650,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
I0111 08:10:49.345514 520924 start.go:143] virtualization:
I0111 08:10:49.349240 520924 out.go:179] * [docker-flags-747538] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I0111 08:10:49.353771 520924 out.go:179] - MINIKUBE_LOCATION=22402
I0111 08:10:49.353846 520924 notify.go:221] Checking for updates...
I0111 08:10:49.361233 520924 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0111 08:10:49.364426 520924 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22402-276769/kubeconfig
I0111 08:10:49.367566 520924 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-276769/.minikube
I0111 08:10:49.370647 520924 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I0111 08:10:49.373618 520924 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I0111 08:10:49.377147 520924 config.go:182] Loaded profile config "force-systemd-flag-176470": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I0111 08:10:49.377321 520924 driver.go:422] Setting default libvirt URI to qemu:///system
I0111 08:10:49.397955 520924 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
I0111 08:10:49.398077 520924 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0111 08:10:49.456806 520924 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 08:10:49.447340882 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I0111 08:10:49.456911 520924 docker.go:319] overlay module found
I0111 08:10:49.460242 520924 out.go:179] * Using the docker driver based on user configuration
I0111 08:10:49.463229 520924 start.go:309] selected driver: docker
I0111 08:10:49.463247 520924 start.go:928] validating driver "docker" against <nil>
I0111 08:10:49.463271 520924 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0111 08:10:49.464023 520924 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0111 08:10:49.521667 520924 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 08:10:49.512130508 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I0111 08:10:49.521812 520924 start_flags.go:333] no existing cluster config was found, will generate one from the flags
I0111 08:10:49.522025 520924 start_flags.go:1014] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
I0111 08:10:49.525045 520924 out.go:179] * Using Docker driver with root privileges
I0111 08:10:49.527974 520924 cni.go:84] Creating CNI manager for ""
I0111 08:10:49.528063 520924 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0111 08:10:49.528077 520924 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0111 08:10:49.528167 520924 start.go:353] cluster config:
{Name:docker-flags-747538 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:docker-flags-747538 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I0111 08:10:49.533303 520924 out.go:179] * Starting "docker-flags-747538" primary control-plane node in "docker-flags-747538" cluster
I0111 08:10:49.536247 520924 cache.go:134] Beginning downloading kic base image for docker with docker
I0111 08:10:49.539333 520924 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
I0111 08:10:49.542210 520924 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I0111 08:10:49.542260 520924 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-276769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4
I0111 08:10:49.542272 520924 cache.go:65] Caching tarball of preloaded images
I0111 08:10:49.542271 520924 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
I0111 08:10:49.542384 520924 preload.go:251] Found /home/jenkins/minikube-integration/22402-276769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0111 08:10:49.542395 520924 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
I0111 08:10:49.542534 520924 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/config.json ...
I0111 08:10:49.542567 520924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/config.json: {Name:mkddbc35a2b6012f37ba90ab45436ce25557e0f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0111 08:10:49.560693 520924 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
I0111 08:10:49.560716 520924 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in daemon, skipping load
I0111 08:10:49.560736 520924 cache.go:243] Successfully downloaded all kic artifacts
I0111 08:10:49.560772 520924 start.go:360] acquireMachinesLock for docker-flags-747538: {Name:mk3014c19513dad4e5876bfc3cf028bc21b9e961 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0111 08:10:49.560888 520924 start.go:364] duration metric: took 94.857µs to acquireMachinesLock for "docker-flags-747538"
I0111 08:10:49.560917 520924 start.go:93] Provisioning new machine with config: &{Name:docker-flags-747538 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:docker-flags-747538 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I0111 08:10:49.560998 520924 start.go:125] createHost starting for "" (driver="docker")
I0111 08:10:49.564432 520924 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
I0111 08:10:49.564663 520924 start.go:159] libmachine.API.Create for "docker-flags-747538" (driver="docker")
I0111 08:10:49.564701 520924 client.go:173] LocalClient.Create starting
I0111 08:10:49.564798 520924 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem
I0111 08:10:49.564837 520924 main.go:144] libmachine: Decoding PEM data...
I0111 08:10:49.564857 520924 main.go:144] libmachine: Parsing certificate...
I0111 08:10:49.564912 520924 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-276769/.minikube/certs/cert.pem
I0111 08:10:49.564934 520924 main.go:144] libmachine: Decoding PEM data...
I0111 08:10:49.564945 520924 main.go:144] libmachine: Parsing certificate...
I0111 08:10:49.565319 520924 cli_runner.go:164] Run: docker network inspect docker-flags-747538 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0111 08:10:49.581266 520924 cli_runner.go:211] docker network inspect docker-flags-747538 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0111 08:10:49.581354 520924 network_create.go:284] running [docker network inspect docker-flags-747538] to gather additional debugging logs...
I0111 08:10:49.581376 520924 cli_runner.go:164] Run: docker network inspect docker-flags-747538
W0111 08:10:49.597247 520924 cli_runner.go:211] docker network inspect docker-flags-747538 returned with exit code 1
I0111 08:10:49.597292 520924 network_create.go:287] error running [docker network inspect docker-flags-747538]: docker network inspect docker-flags-747538: exit status 1
stdout:
[]
stderr:
Error response from daemon: network docker-flags-747538 not found
I0111 08:10:49.597305 520924 network_create.go:289] output of [docker network inspect docker-flags-747538]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network docker-flags-747538 not found
** /stderr **
I0111 08:10:49.597400 520924 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0111 08:10:49.614104 520924 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4553382a3354 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ce:ef:e3:80:f0:4e} reservation:<nil>}
I0111 08:10:49.614510 520924 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-40d7f82078db IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a2:4c:a4:8c:ba:d2} reservation:<nil>}
I0111 08:10:49.614741 520924 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-462883b60cc2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:6e:e8:a2:f7:f9:41} reservation:<nil>}
I0111 08:10:49.615097 520924 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-3b6e1c162656 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:46:94:9f:06:02:24} reservation:<nil>}
I0111 08:10:49.615547 520924 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a08cd0}
I0111 08:10:49.615569 520924 network_create.go:124] attempt to create docker network docker-flags-747538 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
I0111 08:10:49.615625 520924 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-747538 docker-flags-747538
I0111 08:10:49.667963 520924 network_create.go:108] docker network docker-flags-747538 192.168.85.0/24 created
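The network_create.go lines above walk candidate private /24 ranges (192.168.49.0/24, 192.168.58.0/24, ...) and settle on the first one not already claimed by an existing bridge, here 192.168.85.0/24. A minimal shell sketch of the same idea, assuming only the stock docker CLI; the step size of 9 mirrors the candidates in the log, and the upper bound is an illustrative assumption, not minikube's actual Go implementation:
# Illustrative only: find the first 192.168.x.0/24 not claimed by any
# existing docker network, stepping 49, 58, 67, ... as the log does.
taken=$(docker network inspect $(docker network ls -q) \
  --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}' 2>/dev/null)
for third in $(seq 49 9 247); do
  candidate="192.168.${third}.0/24"
  if ! printf '%s\n' "$taken" | grep -qF "$candidate"; then
    echo "free subnet: $candidate"
    break
  fi
done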
I0111 08:10:49.667997 520924 kic.go:121] calculated static IP "192.168.85.2" for the "docker-flags-747538" container
I0111 08:10:49.668089 520924 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0111 08:10:49.688516 520924 cli_runner.go:164] Run: docker volume create docker-flags-747538 --label name.minikube.sigs.k8s.io=docker-flags-747538 --label created_by.minikube.sigs.k8s.io=true
I0111 08:10:49.706687 520924 oci.go:103] Successfully created a docker volume docker-flags-747538
I0111 08:10:49.706780 520924 cli_runner.go:164] Run: docker run --rm --name docker-flags-747538-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-747538 --entrypoint /usr/bin/test -v docker-flags-747538:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -d /var/lib
I0111 08:10:50.229477 520924 oci.go:107] Successfully prepared a docker volume docker-flags-747538
I0111 08:10:50.229555 520924 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I0111 08:10:50.229566 520924 kic.go:194] Starting extracting preloaded images to volume ...
I0111 08:10:50.229636 520924 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-276769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v docker-flags-747538:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir
I0111 08:10:53.570959 520924 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-276769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v docker-flags-747538:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir: (3.341275255s)
I0111 08:10:53.570998 520924 kic.go:203] duration metric: took 3.341428014s to extract preloaded images to volume ...
W0111 08:10:53.571151 520924 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0111 08:10:53.571262 520924 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0111 08:10:53.626189 520924 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname docker-flags-747538 --name docker-flags-747538 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-747538 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=docker-flags-747538 --network docker-flags-747538 --ip 192.168.85.2 --volume docker-flags-747538:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615
I0111 08:10:53.939121 520924 cli_runner.go:164] Run: docker container inspect docker-flags-747538 --format={{.State.Running}}
I0111 08:10:53.966213 520924 cli_runner.go:164] Run: docker container inspect docker-flags-747538 --format={{.State.Status}}
I0111 08:10:53.992131 520924 cli_runner.go:164] Run: docker exec docker-flags-747538 stat /var/lib/dpkg/alternatives/iptables
I0111 08:10:54.053280 520924 oci.go:144] the created container "docker-flags-747538" has a running status.
I0111 08:10:54.053308 520924 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22402-276769/.minikube/machines/docker-flags-747538/id_rsa...
I0111 08:10:54.269598 520924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/machines/docker-flags-747538/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0111 08:10:54.269707 520924 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22402-276769/.minikube/machines/docker-flags-747538/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0111 08:10:54.317630 520924 cli_runner.go:164] Run: docker container inspect docker-flags-747538 --format={{.State.Status}}
I0111 08:10:54.344650 520924 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0111 08:10:54.344668 520924 kic_runner.go:114] Args: [docker exec --privileged docker-flags-747538 chown docker:docker /home/docker/.ssh/authorized_keys]
I0111 08:10:54.430625 520924 cli_runner.go:164] Run: docker container inspect docker-flags-747538 --format={{.State.Status}}
I0111 08:10:54.453352 520924 machine.go:94] provisionDockerMachine start ...
I0111 08:10:54.453451 520924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-747538
I0111 08:10:54.488754 520924 main.go:144] libmachine: Using SSH client type: native
I0111 08:10:54.489084 520924 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil> [] 0s} 127.0.0.1 33370 <nil> <nil>}
I0111 08:10:54.489098 520924 main.go:144] libmachine: About to run SSH command:
hostname
I0111 08:10:54.489813 520924 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53496->127.0.0.1:33370: read: connection reset by peer
I0111 08:10:57.638319 520924 main.go:144] libmachine: SSH cmd err, output: <nil>: docker-flags-747538
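The handshake reset at 08:10:54 followed by a clean `hostname` result three seconds later implies a dial-retry loop while sshd comes up inside the container. A hedged shell equivalent (port, key path, and username are taken from the surrounding log; the retry count and timeouts are assumptions, and minikube actually retries the dial in Go):
# Illustrative wait-for-ssh loop against the forwarded port.
for i in $(seq 1 30); do
  ssh -o StrictHostKeyChecking=no -o ConnectTimeout=2 -p 33370 \
      -i /home/jenkins/minikube-integration/22402-276769/.minikube/machines/docker-flags-747538/id_rsa \
      docker@127.0.0.1 hostname && break
  sleep 1
done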
I0111 08:10:57.638342 520924 ubuntu.go:182] provisioning hostname "docker-flags-747538"
I0111 08:10:57.638413 520924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-747538
I0111 08:10:57.655990 520924 main.go:144] libmachine: Using SSH client type: native
I0111 08:10:57.656366 520924 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil> [] 0s} 127.0.0.1 33370 <nil> <nil>}
I0111 08:10:57.656385 520924 main.go:144] libmachine: About to run SSH command:
sudo hostname docker-flags-747538 && echo "docker-flags-747538" | sudo tee /etc/hostname
I0111 08:10:57.812662 520924 main.go:144] libmachine: SSH cmd err, output: <nil>: docker-flags-747538
I0111 08:10:57.812781 520924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-747538
I0111 08:10:57.830892 520924 main.go:144] libmachine: Using SSH client type: native
I0111 08:10:57.831210 520924 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil> [] 0s} 127.0.0.1 33370 <nil> <nil>}
I0111 08:10:57.831226 520924 main.go:144] libmachine: About to run SSH command:
if ! grep -xq '.*\sdocker-flags-747538' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 docker-flags-747538/g' /etc/hosts;
else
echo '127.0.1.1 docker-flags-747538' | sudo tee -a /etc/hosts;
fi
fi
I0111 08:10:57.979180 520924 main.go:144] libmachine: SSH cmd err, output: <nil>:
I0111 08:10:57.979206 520924 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22402-276769/.minikube CaCertPath:/home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22402-276769/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22402-276769/.minikube}
I0111 08:10:57.979230 520924 ubuntu.go:190] setting up certificates
I0111 08:10:57.979246 520924 provision.go:84] configureAuth start
I0111 08:10:57.979308 520924 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" docker-flags-747538
I0111 08:10:57.996589 520924 provision.go:143] copyHostCerts
I0111 08:10:57.996636 520924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22402-276769/.minikube/ca.pem
I0111 08:10:57.996669 520924 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-276769/.minikube/ca.pem, removing ...
I0111 08:10:57.996682 520924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-276769/.minikube/ca.pem
I0111 08:10:57.996762 520924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22402-276769/.minikube/ca.pem (1082 bytes)
I0111 08:10:57.996856 520924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22402-276769/.minikube/cert.pem
I0111 08:10:57.996877 520924 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-276769/.minikube/cert.pem, removing ...
I0111 08:10:57.996882 520924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-276769/.minikube/cert.pem
I0111 08:10:57.996909 520924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22402-276769/.minikube/cert.pem (1123 bytes)
I0111 08:10:57.996962 520924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22402-276769/.minikube/key.pem
I0111 08:10:57.996980 520924 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-276769/.minikube/key.pem, removing ...
I0111 08:10:57.996985 520924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-276769/.minikube/key.pem
I0111 08:10:57.997014 520924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22402-276769/.minikube/key.pem (1675 bytes)
I0111 08:10:57.997074 520924 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22402-276769/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca-key.pem org=jenkins.docker-flags-747538 san=[127.0.0.1 192.168.85.2 docker-flags-747538 localhost minikube]
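provision.go generates a server certificate signed by the profile CA whose SANs cover 127.0.0.1, 192.168.85.2, the container name, localhost, and minikube. A rough openssl equivalent, purely for illustration; minikube does this in-process in Go, and the key size, file names, and validity period below are assumptions:
# Illustrative only: issue a SAN-bearing server cert from an existing CA.
openssl req -new -newkey rsa:2048 -nodes \
  -keyout server-key.pem -out server.csr \
  -subj "/O=jenkins.docker-flags-747538"
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -out server.pem -days 365 \
  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:docker-flags-747538,DNS:localhost,DNS:minikube')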
I0111 08:10:58.536454 520924 provision.go:177] copyRemoteCerts
I0111 08:10:58.536520 520924 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0111 08:10:58.536558 520924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-747538
I0111 08:10:58.553113 520924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33370 SSHKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/docker-flags-747538/id_rsa Username:docker}
I0111 08:10:58.659185 520924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0111 08:10:58.659270 520924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0111 08:10:58.676842 520924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/machines/server.pem -> /etc/docker/server.pem
I0111 08:10:58.676916 520924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
I0111 08:10:58.694011 520924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0111 08:10:58.694072 520924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0111 08:10:58.711713 520924 provision.go:87] duration metric: took 732.442991ms to configureAuth
I0111 08:10:58.711785 520924 ubuntu.go:206] setting minikube options for container-runtime
I0111 08:10:58.712005 520924 config.go:182] Loaded profile config "docker-flags-747538": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I0111 08:10:58.712103 520924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-747538
I0111 08:10:58.729751 520924 main.go:144] libmachine: Using SSH client type: native
I0111 08:10:58.730981 520924 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil> [] 0s} 127.0.0.1 33370 <nil> <nil>}
I0111 08:10:58.730999 520924 main.go:144] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0111 08:10:58.888341 520924 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
I0111 08:10:58.888364 520924 ubuntu.go:71] root file system type: overlay
I0111 08:10:58.888485 520924 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0111 08:10:58.888568 520924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-747538
I0111 08:10:58.910757 520924 main.go:144] libmachine: Using SSH client type: native
I0111 08:10:58.912196 520924 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil> [] 0s} 127.0.0.1 33370 <nil> <nil>}
I0111 08:10:58.912292 520924 main.go:144] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
Wants=network-online.target containerd.service
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=always
Environment="FOO=BAR"
Environment="BAZ=BAT"
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
-H fd:// --containerd=/run/containerd/containerd.sock \
-H unix:///var/run/docker.sock \
--default-ulimit=nofile=1048576:1048576 \
--tlsverify \
--tlscacert /etc/docker/ca.pem \
--tlscert /etc/docker/server.pem \
--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 --debug --icc=true
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0111 08:10:59.076562 520924 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
Wants=network-online.target containerd.service
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=always
Environment=FOO=BAR
Environment=BAZ=BAT
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H fd:// --containerd=/run/containerd/containerd.sock -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 --debug --icc=true
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
I0111 08:10:59.076660 520924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-747538
I0111 08:10:59.094332 520924 main.go:144] libmachine: Using SSH client type: native
I0111 08:10:59.094655 520924 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil> [] 0s} 127.0.0.1 33370 <nil> <nil>}
I0111 08:10:59.094672 520924 main.go:144] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0111 08:11:00.397732 520924 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2026-01-08 19:56:21.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2026-01-11 08:10:59.072052587 +0000
@@ -9,23 +9,36 @@
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
Restart=always
+Environment=FOO=BAR
+Environment=BAZ=BAT
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H fd:// --containerd=/run/containerd/containerd.sock -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 --debug --icc=true
+ExecReload=/bin/kill -s HUP $MAINPID
+
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
+LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
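The diff above shows minikube replacing /lib/systemd/system/docker.service wholesale; the empty `ExecStart=` line is what lets the second `ExecStart=` take effect, since systemd rejects multiple ExecStart values for anything other than Type=oneshot. The same clearing pattern works as a drop-in that leaves the shipped unit untouched; a hedged sketch, where the override path follows the standard systemd convention and the dockerd flags are illustrative rather than minikube's:
# Illustrative drop-in variant of the ExecStart-clearing pattern above.
sudo mkdir -p /etc/systemd/system/docker.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/docker.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --debug --icc=true
EOF
sudo systemctl daemon-reload && sudo systemctl restart docker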
I0111 08:11:00.397773 520924 machine.go:97] duration metric: took 5.944399319s to provisionDockerMachine
I0111 08:11:00.397786 520924 client.go:176] duration metric: took 10.833073406s to LocalClient.Create
I0111 08:11:00.397813 520924 start.go:167] duration metric: took 10.833150401s to libmachine.API.Create "docker-flags-747538"
I0111 08:11:00.397824 520924 start.go:293] postStartSetup for "docker-flags-747538" (driver="docker")
I0111 08:11:00.397840 520924 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0111 08:11:00.397936 520924 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0111 08:11:00.397995 520924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-747538
I0111 08:11:00.425747 520924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33370 SSHKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/docker-flags-747538/id_rsa Username:docker}
I0111 08:11:00.535501 520924 ssh_runner.go:195] Run: cat /etc/os-release
I0111 08:11:00.540367 520924 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0111 08:11:00.540395 520924 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I0111 08:11:00.540432 520924 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-276769/.minikube/addons for local assets ...
I0111 08:11:00.540502 520924 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-276769/.minikube/files for local assets ...
I0111 08:11:00.540587 520924 filesync.go:149] local asset: /home/jenkins/minikube-integration/22402-276769/.minikube/files/etc/ssl/certs/2786382.pem -> 2786382.pem in /etc/ssl/certs
I0111 08:11:00.540599 520924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/files/etc/ssl/certs/2786382.pem -> /etc/ssl/certs/2786382.pem
I0111 08:11:00.540703 520924 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0111 08:11:00.548481 520924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/files/etc/ssl/certs/2786382.pem --> /etc/ssl/certs/2786382.pem (1708 bytes)
I0111 08:11:00.566435 520924 start.go:296] duration metric: took 168.589908ms for postStartSetup
I0111 08:11:00.566863 520924 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" docker-flags-747538
I0111 08:11:00.584575 520924 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/config.json ...
I0111 08:11:00.584960 520924 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0111 08:11:00.585010 520924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-747538
I0111 08:11:00.602049 520924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33370 SSHKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/docker-flags-747538/id_rsa Username:docker}
I0111 08:11:00.703790 520924 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0111 08:11:00.708694 520924 start.go:128] duration metric: took 11.14768044s to createHost
I0111 08:11:00.708719 520924 start.go:83] releasing machines lock for "docker-flags-747538", held for 11.147818906s
I0111 08:11:00.708791 520924 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" docker-flags-747538
I0111 08:11:00.725687 520924 ssh_runner.go:195] Run: cat /version.json
I0111 08:11:00.725741 520924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-747538
I0111 08:11:00.725999 520924 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0111 08:11:00.726060 520924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-747538
I0111 08:11:00.744272 520924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33370 SSHKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/docker-flags-747538/id_rsa Username:docker}
I0111 08:11:00.752842 520924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33370 SSHKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/docker-flags-747538/id_rsa Username:docker}
I0111 08:11:00.846460 520924 ssh_runner.go:195] Run: systemctl --version
I0111 08:11:00.948121 520924 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0111 08:11:00.953117 520924 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0111 08:11:00.953202 520924 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0111 08:11:00.980681 520924 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
I0111 08:11:00.980757 520924 start.go:496] detecting cgroup driver to use...
I0111 08:11:00.980804 520924 detect.go:175] detected "cgroupfs" cgroup driver on host os
I0111 08:11:00.980954 520924 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0111 08:11:00.995756 520924 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I0111 08:11:01.005670 520924 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0111 08:11:01.015303 520924 containerd.go:147] configuring containerd to use "cgroupfs" as cgroup driver...
I0111 08:11:01.015382 520924 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0111 08:11:01.024678 520924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0111 08:11:01.033807 520924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0111 08:11:01.042636 520924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0111 08:11:01.051691 520924 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0111 08:11:01.059768 520924 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0111 08:11:01.068779 520924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0111 08:11:01.077853 520924 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0111 08:11:01.086986 520924 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0111 08:11:01.094918 520924 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0111 08:11:01.103076 520924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0111 08:11:01.244916 520924 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0111 08:11:01.325753 520924 start.go:496] detecting cgroup driver to use...
I0111 08:11:01.325806 520924 detect.go:175] detected "cgroupfs" cgroup driver on host os
I0111 08:11:01.325859 520924 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0111 08:11:01.341220 520924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0111 08:11:01.354665 520924 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0111 08:11:01.381050 520924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0111 08:11:01.394034 520924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0111 08:11:01.407512 520924 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0111 08:11:01.422626 520924 ssh_runner.go:195] Run: which cri-dockerd
I0111 08:11:01.426500 520924 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0111 08:11:01.434184 520924 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
I0111 08:11:01.447582 520924 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0111 08:11:01.565617 520924 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0111 08:11:01.688439 520924 docker.go:578] configuring docker to use "cgroupfs" as cgroup driver...
I0111 08:11:01.688573 520924 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
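docker.go writes a 130-byte /etc/docker/daemon.json here to pin the "cgroupfs" cgroup driver; the log records only the byte count, not the payload, so the contents below are an assumption about what such a file would look like, not what minikube actually wrote:
# Assumed contents; the log only shows that 130 bytes were scp'd.
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{ "exec-opts": ["native.cgroupdriver=cgroupfs"] }
EOF
# The reset-failed / daemon-reload / restart sequence that follows in the
# log is what makes dockerd pick a file like this up.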
I0111 08:11:01.703071 520924 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
I0111 08:11:01.717759 520924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0111 08:11:01.841223 520924 ssh_runner.go:195] Run: sudo systemctl restart docker
I0111 08:11:02.331482 520924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0111 08:11:02.348529 520924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0111 08:11:02.363545 520924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0111 08:11:02.379991 520924 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0111 08:11:02.515509 520924 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0111 08:11:02.647213 520924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0111 08:11:02.769736 520924 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0111 08:11:02.785421 520924 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
I0111 08:11:02.798179 520924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0111 08:11:02.912586 520924 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0111 08:11:02.978874 520924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0111 08:11:02.994139 520924 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0111 08:11:02.994337 520924 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0111 08:11:02.998727 520924 start.go:574] Will wait 60s for crictl version
I0111 08:11:02.998979 520924 ssh_runner.go:195] Run: which crictl
I0111 08:11:03.003497 520924 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I0111 08:11:03.032157 520924 start.go:590] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 29.1.4
RuntimeApiVersion: v1
I0111 08:11:03.032272 520924 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0111 08:11:03.055420 520924 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0111 08:11:03.083794 520924 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 29.1.4 ...
I0111 08:11:03.086636 520924 out.go:179] - opt debug
I0111 08:11:03.089617 520924 out.go:179] - opt icc=true
I0111 08:11:03.092443 520924 out.go:179] - env FOO=BAR
I0111 08:11:03.095378 520924 out.go:179] - env BAZ=BAT
I0111 08:11:03.098244 520924 cli_runner.go:164] Run: docker network inspect docker-flags-747538 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0111 08:11:03.114929 520924 ssh_runner.go:195] Run: grep 192.168.85.1 host.minikube.internal$ /etc/hosts
I0111 08:11:03.118888 520924 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0111 08:11:03.128929 520924 kubeadm.go:884] updating cluster {Name:docker-flags-747538 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:docker-flags-747538 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
I0111 08:11:03.129053 520924 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I0111 08:11:03.129109 520924 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0111 08:11:03.154307 520924 docker.go:694] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
registry.k8s.io/pause:3.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0111 08:11:03.154330 520924 docker.go:624] Images already preloaded, skipping extraction
I0111 08:11:03.154341 520924 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 docker true true} ...
I0111 08:11:03.154444 520924 kubeadm.go:947] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=docker-flags-747538 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
[Install]
config:
{KubernetesVersion:v1.35.0 ClusterName:docker-flags-747538 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
I0111 08:11:03.154512 520924 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0111 08:11:03.206688 520924 cni.go:84] Creating CNI manager for ""
I0111 08:11:03.206714 520924 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0111 08:11:03.206740 520924 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I0111 08:11:03.206765 520924 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:docker-flags-747538 NodeName:docker-flags-747538 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0111 08:11:03.206922 520924 kubeadm.go:203] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.85.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "docker-flags-747538"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.85.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.35.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
failCgroupV1: false
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
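The three documents above (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are the rendered config that gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. If a hand-edited variant of such a file is ever needed, newer kubeadm releases can sanity-check it before use (file name illustrative; the validate subcommand is a relatively recent kubeadm addition):

  kubeadm config validate --config kubeadm.yaml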
I0111 08:11:03.207034 520924 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
I0111 08:11:03.215311 520924 binaries.go:51] Found k8s binaries, skipping transfer
I0111 08:11:03.215405 520924 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0111 08:11:03.223392 520924 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
I0111 08:11:03.236933 520924 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0111 08:11:03.249661 520924 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2240 bytes)
I0111 08:11:03.263019 520924 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I0111 08:11:03.266942 520924 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0111 08:11:03.277694 520924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0111 08:11:03.403482 520924 ssh_runner.go:195] Run: sudo systemctl start kubelet
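After the unit drop-in and service file are scp'd in, daemon-reload re-reads systemd's unit state and the kubelet is started. A quick manual check that the unit actually came up, using only standard systemctl/journalctl commands (nothing minikube-specific):

  sudo systemctl is-active kubelet    # prints "active" on success
  sudo journalctl -u kubelet -n 50    # most recent kubelet log lines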
I0111 08:11:03.421836 520924 certs.go:69] Setting up /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538 for IP: 192.168.85.2
I0111 08:11:03.421859 520924 certs.go:195] generating shared ca certs ...
I0111 08:11:03.421875 520924 certs.go:227] acquiring lock for ca certs: {Name:mk5238b420a0ee024668d9aed797ac9a441cf30a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0111 08:11:03.422027 520924 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22402-276769/.minikube/ca.key
I0111 08:11:03.422080 520924 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22402-276769/.minikube/proxy-client-ca.key
I0111 08:11:03.422092 520924 certs.go:257] generating profile certs ...
I0111 08:11:03.422147 520924 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/client.key
I0111 08:11:03.422162 520924 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/client.crt with IP's: []
I0111 08:11:03.539758 520924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/client.crt ...
I0111 08:11:03.539790 520924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/client.crt: {Name:mk270fca02964aa29f311e366014d5733f531228 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0111 08:11:03.539993 520924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/client.key ...
I0111 08:11:03.540009 520924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/client.key: {Name:mke5998452e96841a984b78967db750f062a137a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0111 08:11:03.540105 520924 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/apiserver.key.5e2239bb
I0111 08:11:03.540122 520924 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/apiserver.crt.5e2239bb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
I0111 08:11:03.941138 520924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/apiserver.crt.5e2239bb ...
I0111 08:11:03.941177 520924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/apiserver.crt.5e2239bb: {Name:mk30ca1e7aba90138dc745c3c5f0b7897bf7938f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0111 08:11:03.941382 520924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/apiserver.key.5e2239bb ...
I0111 08:11:03.941399 520924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/apiserver.key.5e2239bb: {Name:mkfcc2878fce37bf3bf735da13ccc68e9427f69c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0111 08:11:03.941487 520924 certs.go:382] copying /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/apiserver.crt.5e2239bb -> /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/apiserver.crt
I0111 08:11:03.941572 520924 certs.go:386] copying /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/apiserver.key.5e2239bb -> /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/apiserver.key
I0111 08:11:03.941633 520924 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/proxy-client.key
I0111 08:11:03.941651 520924 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/proxy-client.crt with IP's: []
I0111 08:11:04.202994 520924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/proxy-client.crt ...
I0111 08:11:04.203038 520924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/proxy-client.crt: {Name:mka203b1581fea2d77db09fdd4dc7dfae878c175 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0111 08:11:04.203218 520924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/proxy-client.key ...
I0111 08:11:04.203235 520924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/proxy-client.key: {Name:mkf9d6bd09f6248861b5dcd70a3546265c71546f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
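The profile certs generated above are ordinary CA-signed key pairs: a client cert for kubectl ("minikube-user"), an apiserver serving cert covering the service and node IPs, and a front-proxy ("aggregator") client cert. A rough plain-openssl equivalent of the client cert, for illustration only (the subject matches minikube's conventional client identity; file names are placeholders):

  openssl genrsa -out client.key 2048
  openssl req -new -key client.key -subj "/O=system:masters/CN=minikube-user" -out client.csr
  openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out client.crt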
I0111 08:11:04.203311 520924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0111 08:11:04.203340 520924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0111 08:11:04.203357 520924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0111 08:11:04.203374 520924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0111 08:11:04.203385 520924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0111 08:11:04.203401 520924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0111 08:11:04.203412 520924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0111 08:11:04.203428 520924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0111 08:11:04.203484 520924 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/278638.pem (1338 bytes)
W0111 08:11:04.203526 520924 certs.go:480] ignoring /home/jenkins/minikube-integration/22402-276769/.minikube/certs/278638_empty.pem, impossibly tiny 0 bytes
I0111 08:11:04.203539 520924 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca-key.pem (1675 bytes)
I0111 08:11:04.203570 520924 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem (1082 bytes)
I0111 08:11:04.203601 520924 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/cert.pem (1123 bytes)
I0111 08:11:04.203629 520924 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/key.pem (1675 bytes)
I0111 08:11:04.203677 520924 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-276769/.minikube/files/etc/ssl/certs/2786382.pem (1708 bytes)
I0111 08:11:04.203715 520924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/files/etc/ssl/certs/2786382.pem -> /usr/share/ca-certificates/2786382.pem
I0111 08:11:04.203741 520924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0111 08:11:04.203757 520924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/278638.pem -> /usr/share/ca-certificates/278638.pem
I0111 08:11:04.204350 520924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0111 08:11:04.222456 520924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0111 08:11:04.240610 520924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0111 08:11:04.258784 520924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0111 08:11:04.276911 520924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
I0111 08:11:04.296921 520924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0111 08:11:04.315427 520924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0111 08:11:04.333328 520924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0111 08:11:04.355116 520924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/files/etc/ssl/certs/2786382.pem --> /usr/share/ca-certificates/2786382.pem (1708 bytes)
I0111 08:11:04.373870 520924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0111 08:11:04.391860 520924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/certs/278638.pem --> /usr/share/ca-certificates/278638.pem (1338 bytes)
I0111 08:11:04.410365 520924 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
I0111 08:11:04.423690 520924 ssh_runner.go:195] Run: openssl version
I0111 08:11:04.430385 520924 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2786382.pem
I0111 08:11:04.438057 520924 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2786382.pem /etc/ssl/certs/2786382.pem
I0111 08:11:04.445923 520924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2786382.pem
I0111 08:11:04.450333 520924 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 11 07:30 /usr/share/ca-certificates/2786382.pem
I0111 08:11:04.450447 520924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2786382.pem
I0111 08:11:04.491674 520924 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I0111 08:11:04.499247 520924 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2786382.pem /etc/ssl/certs/3ec20f2e.0
I0111 08:11:04.506933 520924 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I0111 08:11:04.514584 520924 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I0111 08:11:04.522445 520924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0111 08:11:04.526347 520924 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 11 07:24 /usr/share/ca-certificates/minikubeCA.pem
I0111 08:11:04.526430 520924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0111 08:11:04.567441 520924 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I0111 08:11:04.575040 520924 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
I0111 08:11:04.582544 520924 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/278638.pem
I0111 08:11:04.589968 520924 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/278638.pem /etc/ssl/certs/278638.pem
I0111 08:11:04.597888 520924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/278638.pem
I0111 08:11:04.601813 520924 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 11 07:30 /usr/share/ca-certificates/278638.pem
I0111 08:11:04.601881 520924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/278638.pem
I0111 08:11:04.643564 520924 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I0111 08:11:04.651200 520924 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/278638.pem /etc/ssl/certs/51391683.0
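The repeating test/ln/ls/openssl sequence above implements OpenSSL's hashed-directory convention: each CA placed under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named <subject-hash>.0, so verification can find it by hash (b5213941 is the hash openssl printed for minikubeCA in this run). The whole dance for one cert reduces to roughly:

  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"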
I0111 08:11:04.658573 520924 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0111 08:11:04.662081 520924 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0111 08:11:04.662166 520924 kubeadm.go:401] StartCluster: {Name:docker-flags-747538 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:docker-flags-747538 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I0111 08:11:04.662303 520924 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0111 08:11:04.680219 520924 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0111 08:11:04.687967 520924 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0111 08:11:04.695818 520924 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I0111 08:11:04.695886 520924 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0111 08:11:04.703848 520924 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0111 08:11:04.703871 520924 kubeadm.go:158] found existing configuration files:
I0111 08:11:04.703947 520924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0111 08:11:04.711621 520924 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0111 08:11:04.711694 520924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0111 08:11:04.719183 520924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0111 08:11:04.726885 520924 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0111 08:11:04.726993 520924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0111 08:11:04.734592 520924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0111 08:11:04.742664 520924 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0111 08:11:04.742761 520924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0111 08:11:04.750405 520924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0111 08:11:04.758314 520924 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0111 08:11:04.758397 520924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0111 08:11:04.766585 520924 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
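The long --ignore-preflight-errors list on the init command above appears to disable exactly the checks that cannot pass inside a KIC container: directories and ports minikube already manages, swap, CPU/memory minimums, and SystemVerification (which the log noted is skipped for the docker driver). When debugging, the preflight phase can be re-run in isolation against the same config (config path from this run):

  sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=SystemVerification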
I0111 08:11:04.808924 520924 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
I0111 08:11:04.809139 520924 kubeadm.go:319] [preflight] Running pre-flight checks
I0111 08:11:04.886354 520924 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I0111 08:11:04.886440 520924 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I0111 08:11:04.886480 520924 kubeadm.go:319] OS: Linux
I0111 08:11:04.886547 520924 kubeadm.go:319] CGROUPS_CPU: enabled
I0111 08:11:04.886600 520924 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I0111 08:11:04.886651 520924 kubeadm.go:319] CGROUPS_CPUSET: enabled
I0111 08:11:04.886703 520924 kubeadm.go:319] CGROUPS_DEVICES: enabled
I0111 08:11:04.886755 520924 kubeadm.go:319] CGROUPS_FREEZER: enabled
I0111 08:11:04.886805 520924 kubeadm.go:319] CGROUPS_MEMORY: enabled
I0111 08:11:04.886891 520924 kubeadm.go:319] CGROUPS_PIDS: enabled
I0111 08:11:04.886943 520924 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I0111 08:11:04.887000 520924 kubeadm.go:319] CGROUPS_BLKIO: enabled
I0111 08:11:04.973460 520924 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I0111 08:11:04.973579 520924 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0111 08:11:04.973675 520924 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0111 08:11:04.995257 520924 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0111 08:11:04.999399 520924 out.go:252] - Generating certificates and keys ...
I0111 08:11:04.999510 520924 kubeadm.go:319] [certs] Using existing ca certificate authority
I0111 08:11:04.999582 520924 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I0111 08:11:05.160232 520924 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I0111 08:11:05.498105 520924 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I0111 08:11:05.828651 520924 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I0111 08:11:06.119780 520924 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I0111 08:11:06.615562 520924 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I0111 08:11:06.615743 520924 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [docker-flags-747538 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I0111 08:11:06.923732 520924 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I0111 08:11:06.924188 520924 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [docker-flags-747538 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I0111 08:11:07.177014 520924 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I0111 08:11:07.304847 520924 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I0111 08:11:07.751910 520924 kubeadm.go:319] [certs] Generating "sa" key and public key
I0111 08:11:07.752226 520924 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0111 08:11:08.130275 520924 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I0111 08:11:08.358093 520924 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0111 08:11:08.565532 520924 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0111 08:11:08.889031 520924 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0111 08:11:09.120333 520924 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0111 08:11:09.120967 520924 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0111 08:11:09.124514 520924 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0111 08:11:09.128211 520924 out.go:252] - Booting up control plane ...
I0111 08:11:09.128321 520924 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0111 08:11:09.128399 520924 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0111 08:11:09.128878 520924 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0111 08:11:09.145487 520924 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0111 08:11:09.145807 520924 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I0111 08:11:09.154175 520924 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I0111 08:11:09.154480 520924 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0111 08:11:09.154699 520924 kubeadm.go:319] [kubelet-start] Starting the kubelet
I0111 08:11:09.288104 520924 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0111 08:11:09.288226 520924 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0111 08:11:11.288730 520924 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.000960503s
I0111 08:11:11.292185 520924 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I0111 08:11:11.292281 520924 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
I0111 08:11:11.292370 520924 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I0111 08:11:11.292705 520924 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I0111 08:11:13.307884 520924 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.015247141s
I0111 08:11:15.360506 520924 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.068270586s
I0111 08:11:17.293986 520924 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001534258s
I0111 08:11:17.339991 520924 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0111 08:11:17.359003 520924 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0111 08:11:17.375748 520924 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
I0111 08:11:17.375950 520924 kubeadm.go:319] [mark-control-plane] Marking the node docker-flags-747538 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0111 08:11:17.390234 520924 kubeadm.go:319] [bootstrap-token] Using token: bwyv57.9wca10o27ezxy0ff
I0111 08:11:17.452858 510536 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0111 08:11:17.452923 510536 kubeadm.go:319]
I0111 08:11:17.453044 510536 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
I0111 08:11:17.455493 510536 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
I0111 08:11:17.455552 510536 kubeadm.go:319] [preflight] Running pre-flight checks
I0111 08:11:17.455655 510536 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I0111 08:11:17.455726 510536 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I0111 08:11:17.455771 510536 kubeadm.go:319] OS: Linux
I0111 08:11:17.455821 510536 kubeadm.go:319] CGROUPS_CPU: enabled
I0111 08:11:17.455882 510536 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I0111 08:11:17.455934 510536 kubeadm.go:319] CGROUPS_CPUSET: enabled
I0111 08:11:17.455990 510536 kubeadm.go:319] CGROUPS_DEVICES: enabled
I0111 08:11:17.456045 510536 kubeadm.go:319] CGROUPS_FREEZER: enabled
I0111 08:11:17.456098 510536 kubeadm.go:319] CGROUPS_MEMORY: enabled
I0111 08:11:17.456174 510536 kubeadm.go:319] CGROUPS_PIDS: enabled
I0111 08:11:17.456250 510536 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I0111 08:11:17.456309 510536 kubeadm.go:319] CGROUPS_BLKIO: enabled
I0111 08:11:17.456404 510536 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I0111 08:11:17.456555 510536 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0111 08:11:17.456685 510536 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0111 08:11:17.456751 510536 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0111 08:11:17.461728 510536 out.go:252] - Generating certificates and keys ...
I0111 08:11:17.461862 510536 kubeadm.go:319] [certs] Using existing ca certificate authority
I0111 08:11:17.461936 510536 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I0111 08:11:17.462020 510536 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0111 08:11:17.462086 510536 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
I0111 08:11:17.462160 510536 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
I0111 08:11:17.462218 510536 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
I0111 08:11:17.462283 510536 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
I0111 08:11:17.462345 510536 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
I0111 08:11:17.462426 510536 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0111 08:11:17.462501 510536 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0111 08:11:17.462539 510536 kubeadm.go:319] [certs] Using the existing "sa" key
I0111 08:11:17.462595 510536 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0111 08:11:17.462647 510536 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I0111 08:11:17.462704 510536 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0111 08:11:17.462757 510536 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0111 08:11:17.462821 510536 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0111 08:11:17.462945 510536 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0111 08:11:17.463059 510536 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0111 08:11:17.463156 510536 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0111 08:11:17.466057 510536 out.go:252] - Booting up control plane ...
I0111 08:11:17.466204 510536 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0111 08:11:17.466297 510536 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0111 08:11:17.466399 510536 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0111 08:11:17.466523 510536 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0111 08:11:17.466625 510536 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I0111 08:11:17.466736 510536 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I0111 08:11:17.466873 510536 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0111 08:11:17.466916 510536 kubeadm.go:319] [kubelet-start] Starting the kubelet
I0111 08:11:17.467064 510536 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0111 08:11:17.467174 510536 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0111 08:11:17.467272 510536 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000843963s
I0111 08:11:17.467285 510536 kubeadm.go:319]
I0111 08:11:17.467355 510536 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I0111 08:11:17.467421 510536 kubeadm.go:319] - The kubelet is not running
I0111 08:11:17.467571 510536 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0111 08:11:17.467585 510536 kubeadm.go:319]
I0111 08:11:17.467699 510536 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0111 08:11:17.467740 510536 kubeadm.go:319] - 'systemctl status kubelet'
I0111 08:11:17.467774 510536 kubeadm.go:319] - 'journalctl -xeu kubelet'
I0111 08:11:17.467797 510536 kubeadm.go:319]
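Besides the two commands kubeadm suggests, the health probe it was polling can be hit directly from inside the node; a dead kubelet refuses the connection exactly as the error above shows (standard kubelet healthz endpoint on port 10248):

  curl -sSL http://127.0.0.1:10248/healthz && echo    # a healthy kubelet returns "ok"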
I0111 08:11:17.467843 510536 kubeadm.go:403] duration metric: took 8m6.858627939s to StartCluster
I0111 08:11:17.467883 510536 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
I0111 08:11:17.467954 510536 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
I0111 08:11:17.510393 510536 cri.go:96] found id: ""
I0111 08:11:17.510436 510536 logs.go:282] 0 containers: []
W0111 08:11:17.510445 510536 logs.go:284] No container was found matching "kube-apiserver"
I0111 08:11:17.510454 510536 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
I0111 08:11:17.510520 510536 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
I0111 08:11:17.540055 510536 cri.go:96] found id: ""
I0111 08:11:17.540090 510536 logs.go:282] 0 containers: []
W0111 08:11:17.540099 510536 logs.go:284] No container was found matching "etcd"
I0111 08:11:17.540106 510536 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
I0111 08:11:17.540168 510536 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
I0111 08:11:17.568990 510536 cri.go:96] found id: ""
I0111 08:11:17.569063 510536 logs.go:282] 0 containers: []
W0111 08:11:17.569086 510536 logs.go:284] No container was found matching "coredns"
I0111 08:11:17.569106 510536 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
I0111 08:11:17.569199 510536 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
I0111 08:11:17.598546 510536 cri.go:96] found id: ""
I0111 08:11:17.598624 510536 logs.go:282] 0 containers: []
W0111 08:11:17.598647 510536 logs.go:284] No container was found matching "kube-scheduler"
I0111 08:11:17.598667 510536 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
I0111 08:11:17.598751 510536 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
I0111 08:11:17.650676 510536 cri.go:96] found id: ""
I0111 08:11:17.650750 510536 logs.go:282] 0 containers: []
W0111 08:11:17.650773 510536 logs.go:284] No container was found matching "kube-proxy"
I0111 08:11:17.650794 510536 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
I0111 08:11:17.650928 510536 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
I0111 08:11:17.684396 510536 cri.go:96] found id: ""
I0111 08:11:17.684474 510536 logs.go:282] 0 containers: []
W0111 08:11:17.684505 510536 logs.go:284] No container was found matching "kube-controller-manager"
I0111 08:11:17.684527 510536 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
I0111 08:11:17.684636 510536 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
I0111 08:11:17.745829 510536 cri.go:96] found id: ""
I0111 08:11:17.745873 510536 logs.go:282] 0 containers: []
W0111 08:11:17.745883 510536 logs.go:284] No container was found matching "kindnet"
I0111 08:11:17.745892 510536 logs.go:123] Gathering logs for kubelet ...
I0111 08:11:17.745930 510536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0111 08:11:17.828347 510536 logs.go:123] Gathering logs for dmesg ...
I0111 08:11:17.828383 510536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0111 08:11:17.850604 510536 logs.go:123] Gathering logs for describe nodes ...
I0111 08:11:17.850630 510536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0111 08:11:17.973516 510536 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E0111 08:11:17.963026 5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0111 08:11:17.963926 5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0111 08:11:17.967222 5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0111 08:11:17.967572 5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0111 08:11:17.969066 5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
E0111 08:11:17.963026 5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0111 08:11:17.963926 5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0111 08:11:17.967222 5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0111 08:11:17.967572 5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0111 08:11:17.969066 5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0111 08:11:17.973541 510536 logs.go:123] Gathering logs for Docker ...
I0111 08:11:17.973554 510536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0111 08:11:18.001046 510536 logs.go:123] Gathering logs for container status ...
I0111 08:11:18.001086 510536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W0111 08:11:18.046288 510536 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000843963s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
W0111 08:11:18.046406 510536 out.go:285] *
W0111 08:11:18.046610 510536 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000843963s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
W0111 08:11:18.046721 510536 out.go:285] *
W0111 08:11:18.047132  510536 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0111 08:11:18.052816 510536 out.go:203]
W0111 08:11:18.055641 510536 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000843963s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
W0111 08:11:18.055919 510536 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0111 08:11:18.055975 510536 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
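The suggestion line and the cgroups v1 warning above point at the same root cause and each names a remediation. The sketch below combines both, reusing the binary, profile, driver, and runtime from this run; the `failCgroupV1` spelling is the KubeletConfiguration-file form of the 'FailCgroupV1' option named in the warning and should be verified against the kubelet version in use:

  # 1) Retry with the cgroup driver the suggestion above proposes.
  out/minikube-linux-arm64 start -p force-systemd-flag-176470 \
    --driver=docker --container-runtime=docker \
    --extra-config=kubelet.cgroup-driver=systemd

  # 2) If the host must stay on cgroup v1, the warning says kubelet v1.35+
  #    additionally needs FailCgroupV1 set to false in its configuration file
  #    (crude append for illustration; a real fix would edit the existing key
  #    in the config.yaml written by the kubelet-start phase above).
  echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml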
I0111 08:11:18.060639 510536 out.go:203]
I0111 08:11:17.393179 520924 out.go:252] - Configuring RBAC rules ...
I0111 08:11:17.393307 520924 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0111 08:11:17.397444 520924 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0111 08:11:17.407118 520924 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0111 08:11:17.412368 520924 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0111 08:11:17.417190 520924 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0111 08:11:17.423219 520924 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0111 08:11:17.702432 520924 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0111 08:11:18.163794 520924 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
I0111 08:11:18.702151 520924 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
I0111 08:11:18.703799 520924 kubeadm.go:319]
I0111 08:11:18.703882 520924 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
I0111 08:11:18.703888 520924 kubeadm.go:319]
I0111 08:11:18.703965 520924 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
I0111 08:11:18.703969 520924 kubeadm.go:319]
I0111 08:11:18.703994 520924 kubeadm.go:319] mkdir -p $HOME/.kube
I0111 08:11:18.704465 520924 kubeadm.go:319] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0111 08:11:18.704530 520924 kubeadm.go:319] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0111 08:11:18.704535 520924 kubeadm.go:319]
I0111 08:11:18.704589 520924 kubeadm.go:319] Alternatively, if you are the root user, you can run:
I0111 08:11:18.704592 520924 kubeadm.go:319]
I0111 08:11:18.704640 520924 kubeadm.go:319] export KUBECONFIG=/etc/kubernetes/admin.conf
I0111 08:11:18.704643 520924 kubeadm.go:319]
I0111 08:11:18.704701 520924 kubeadm.go:319] You should now deploy a pod network to the cluster.
I0111 08:11:18.704776 520924 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0111 08:11:18.704844 520924 kubeadm.go:319] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0111 08:11:18.704850 520924 kubeadm.go:319]
I0111 08:11:18.705164 520924 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
I0111 08:11:18.705247 520924 kubeadm.go:319] and service account keys on each node and then running the following as root:
I0111 08:11:18.705251 520924 kubeadm.go:319]
I0111 08:11:18.705546 520924 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token bwyv57.9wca10o27ezxy0ff \
I0111 08:11:18.705654 520924 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:818ce15c86fa4793707dcde7e618897f3968773ad82953fed09116f6b0602c24 \
I0111 08:11:18.705876 520924 kubeadm.go:319] --control-plane
I0111 08:11:18.705885 520924 kubeadm.go:319]
I0111 08:11:18.706157 520924 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
I0111 08:11:18.706166 520924 kubeadm.go:319]
I0111 08:11:18.706472 520924 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token bwyv57.9wca10o27ezxy0ff \
I0111 08:11:18.706758 520924 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:818ce15c86fa4793707dcde7e618897f3968773ad82953fed09116f6b0602c24
I0111 08:11:18.713780 520924 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I0111 08:11:18.714408 520924 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I0111 08:11:18.714535 520924 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0111 08:11:18.714546 520924 cni.go:84] Creating CNI manager for ""
I0111 08:11:18.714560 520924 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0111 08:11:18.718177 520924 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
I0111 08:11:18.721094 520924 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0111 08:11:18.795763 520924 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
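The runner above ships a 496-byte bridge conflist to /etc/cni/net.d, but the file's contents are not reproduced in this log. The following is only an illustrative sketch of a minimal bridge CNI conflist of that general shape; the plugin names follow the CNI spec, and the subnet is hypothetical, not taken from this run:

  # Hypothetical minimal bridge conflist; NOT the actual file minikube wrote.
  sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
  {
    "cniVersion": "0.3.1",
    "name": "bridge",
    "plugins": [
      {
        "type": "bridge",
        "bridge": "bridge",
        "isDefaultGateway": true,
        "ipMasq": true,
        "ipam": {
          "type": "host-local",
          "subnet": "10.244.0.0/16"
        }
      },
      {
        "type": "portmap",
        "capabilities": { "portMappings": true }
      }
    ]
  }
  EOF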
I0111 08:11:18.840500 520924 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0111 08:11:18.840628 520924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0111 08:11:18.840704 520924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes docker-flags-747538 minikube.k8s.io/updated_at=2026_01_11T08_11_18_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04 minikube.k8s.io/name=docker-flags-747538 minikube.k8s.io/primary=true
I0111 08:11:19.064744 520924 ops.go:34] apiserver oom_adj: -16
I0111 08:11:19.064755 520924 kubeadm.go:1114] duration metric: took 224.176624ms to wait for elevateKubeSystemPrivileges
I0111 08:11:19.064780 520924 kubeadm.go:403] duration metric: took 14.402621199s to StartCluster
I0111 08:11:19.064804 520924 settings.go:142] acquiring lock: {Name:mk2450911e4e3da6233070d23405462f9cda31b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0111 08:11:19.064878 520924 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/22402-276769/kubeconfig
I0111 08:11:19.065488 520924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-276769/kubeconfig: {Name:mk23bbe94b13868b5365bf437bc6e69ac4646cb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0111 08:11:19.065714 520924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0111 08:11:19.065715 520924 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I0111 08:11:19.065976 520924 config.go:182] Loaded profile config "docker-flags-747538": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I0111 08:11:19.068850 520924 out.go:179] * Verifying Kubernetes components...
I0111 08:11:19.071842 520924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
==> Docker <==
Jan 11 08:03:07 force-systemd-flag-176470 dockerd[1142]: time="2026-01-11T08:03:07.828330129Z" level=info msg="Restoring containers: start."
Jan 11 08:03:07 force-systemd-flag-176470 dockerd[1142]: time="2026-01-11T08:03:07.847273714Z" level=info msg="Deleting nftables IPv4 rules" error="exit status 1"
Jan 11 08:03:07 force-systemd-flag-176470 dockerd[1142]: time="2026-01-11T08:03:07.863311887Z" level=info msg="Deleting nftables IPv6 rules" error="exit status 1"
Jan 11 08:03:08 force-systemd-flag-176470 dockerd[1142]: time="2026-01-11T08:03:08.073563281Z" level=info msg="Loading containers: done."
Jan 11 08:03:08 force-systemd-flag-176470 dockerd[1142]: time="2026-01-11T08:03:08.084984648Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
Jan 11 08:03:08 force-systemd-flag-176470 dockerd[1142]: time="2026-01-11T08:03:08.085054701Z" level=info msg="Docker daemon" commit=08440b6 containerd-snapshotter=false storage-driver=overlay2 version=29.1.4
Jan 11 08:03:08 force-systemd-flag-176470 dockerd[1142]: time="2026-01-11T08:03:08.085108271Z" level=info msg="Initializing buildkit"
Jan 11 08:03:08 force-systemd-flag-176470 dockerd[1142]: time="2026-01-11T08:03:08.105098506Z" level=info msg="Completed buildkit initialization"
Jan 11 08:03:08 force-systemd-flag-176470 dockerd[1142]: time="2026-01-11T08:03:08.110602123Z" level=info msg="Daemon has completed initialization"
Jan 11 08:03:08 force-systemd-flag-176470 dockerd[1142]: time="2026-01-11T08:03:08.110777699Z" level=info msg="API listen on /var/run/docker.sock"
Jan 11 08:03:08 force-systemd-flag-176470 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 11 08:03:08 force-systemd-flag-176470 dockerd[1142]: time="2026-01-11T08:03:08.112617910Z" level=info msg="API listen on /run/docker.sock"
Jan 11 08:03:08 force-systemd-flag-176470 dockerd[1142]: time="2026-01-11T08:03:08.112699787Z" level=info msg="API listen on [::]:2376"
Jan 11 08:03:08 force-systemd-flag-176470 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
Jan 11 08:03:08 force-systemd-flag-176470 cri-dockerd[1424]: time="2026-01-11T08:03:08Z" level=info msg="Starting cri-dockerd dev (HEAD)"
Jan 11 08:03:08 force-systemd-flag-176470 cri-dockerd[1424]: time="2026-01-11T08:03:08Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
Jan 11 08:03:08 force-systemd-flag-176470 cri-dockerd[1424]: time="2026-01-11T08:03:08Z" level=info msg="Start docker client with request timeout 0s"
Jan 11 08:03:08 force-systemd-flag-176470 cri-dockerd[1424]: time="2026-01-11T08:03:08Z" level=info msg="Hairpin mode is set to hairpin-veth"
Jan 11 08:03:08 force-systemd-flag-176470 cri-dockerd[1424]: time="2026-01-11T08:03:08Z" level=info msg="Loaded network plugin cni"
Jan 11 08:03:08 force-systemd-flag-176470 cri-dockerd[1424]: time="2026-01-11T08:03:08Z" level=info msg="Docker cri networking managed by network plugin cni"
Jan 11 08:03:08 force-systemd-flag-176470 cri-dockerd[1424]: time="2026-01-11T08:03:08Z" level=info msg="Setting cgroupDriver systemd"
Jan 11 08:03:08 force-systemd-flag-176470 cri-dockerd[1424]: time="2026-01-11T08:03:08Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
Jan 11 08:03:08 force-systemd-flag-176470 cri-dockerd[1424]: time="2026-01-11T08:03:08Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
Jan 11 08:03:08 force-systemd-flag-176470 cri-dockerd[1424]: time="2026-01-11T08:03:08Z" level=info msg="Start cri-dockerd grpc backend"
Jan 11 08:03:08 force-systemd-flag-176470 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
==> describe nodes <==
command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E0111 08:11:20.076993 5746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0111 08:11:20.078022 5746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0111 08:11:20.079887 5746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0111 08:11:20.080435 5746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0111 08:11:20.082060 5746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
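The connection-refused errors above mean nothing is listening on the apiserver port yet, which is consistent with the kubelet never launching the static pods. A quick way to confirm that directly on the node (a sketch; the port is the one named in the errors above):

  # Look for a listener on the apiserver port.
  sudo ss -tlnp | grep 8443 || echo "no listener on 8443"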
==> dmesg <==
[Jan11 06:45] overlayfs: idmapped layers are currently not supported
[Jan11 06:46] overlayfs: idmapped layers are currently not supported
[Jan11 06:47] overlayfs: idmapped layers are currently not supported
[Jan11 06:56] overlayfs: idmapped layers are currently not supported
[ +5.181200] overlayfs: idmapped layers are currently not supported
[Jan11 07:00] overlayfs: idmapped layers are currently not supported
[Jan11 07:01] overlayfs: idmapped layers are currently not supported
[Jan11 07:06] overlayfs: idmapped layers are currently not supported
[Jan11 07:07] overlayfs: idmapped layers are currently not supported
[Jan11 07:08] overlayfs: idmapped layers are currently not supported
[Jan11 07:09] overlayfs: idmapped layers are currently not supported
[ +36.684603] overlayfs: idmapped layers are currently not supported
[Jan11 07:10] overlayfs: idmapped layers are currently not supported
[Jan11 07:11] overlayfs: idmapped layers are currently not supported
[Jan11 07:12] overlayfs: idmapped layers are currently not supported
[ +18.034227] overlayfs: idmapped layers are currently not supported
[Jan11 07:13] overlayfs: idmapped layers are currently not supported
[Jan11 07:14] overlayfs: idmapped layers are currently not supported
[Jan11 07:15] overlayfs: idmapped layers are currently not supported
[ +23.411747] overlayfs: idmapped layers are currently not supported
[Jan11 07:16] overlayfs: idmapped layers are currently not supported
[ +26.028245] overlayfs: idmapped layers are currently not supported
[Jan11 07:17] overlayfs: idmapped layers are currently not supported
[Jan11 07:18] overlayfs: idmapped layers are currently not supported
[Jan11 07:23] kauditd_printk_skb: 8 callbacks suppressed
==> kernel <==
08:11:20 up 2:53, 0 user, load average: 2.83, 1.43, 1.95
Linux force-systemd-flag-176470 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
==> kubelet <==
Jan 11 08:11:16 force-systemd-flag-176470 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 11 08:11:17 force-systemd-flag-176470 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
Jan 11 08:11:17 force-systemd-flag-176470 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 11 08:11:17 force-systemd-flag-176470 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 11 08:11:17 force-systemd-flag-176470 kubelet[5567]: E0111 08:11:17.722676 5567 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Jan 11 08:11:17 force-systemd-flag-176470 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 11 08:11:17 force-systemd-flag-176470 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 11 08:11:18 force-systemd-flag-176470 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
Jan 11 08:11:18 force-systemd-flag-176470 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 11 08:11:18 force-systemd-flag-176470 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 11 08:11:18 force-systemd-flag-176470 kubelet[5615]: E0111 08:11:18.475424 5615 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Jan 11 08:11:18 force-systemd-flag-176470 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 11 08:11:18 force-systemd-flag-176470 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 11 08:11:19 force-systemd-flag-176470 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
Jan 11 08:11:19 force-systemd-flag-176470 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 11 08:11:19 force-systemd-flag-176470 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 11 08:11:19 force-systemd-flag-176470 kubelet[5661]: E0111 08:11:19.235128 5661 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Jan 11 08:11:19 force-systemd-flag-176470 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 11 08:11:19 force-systemd-flag-176470 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 11 08:11:19 force-systemd-flag-176470 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
Jan 11 08:11:19 force-systemd-flag-176470 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 11 08:11:19 force-systemd-flag-176470 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 11 08:11:19 force-systemd-flag-176470 kubelet[5728]: E0111 08:11:19.950698 5728 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Jan 11 08:11:19 force-systemd-flag-176470 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 11 08:11:19 force-systemd-flag-176470 systemd[1]: kubelet.service: Failed with result 'exit-code'.
-- /stdout --
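The kubelet section of the log dump above shows the actual failure loop: kubelet v1.35 refuses to validate its configuration on a host using cgroup v1, and systemd restarts it endlessly (restart counter 320+). One well-known way to check which cgroup version a host runs:

  # cgroup2fs => unified cgroup v2 hierarchy; tmpfs => legacy cgroup v1.
  stat -fc %T /sys/fs/cgroup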
helpers_test.go:263: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-176470 -n force-systemd-flag-176470
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-176470 -n force-systemd-flag-176470: exit status 6 (368.205038ms)
-- stdout --
Stopped
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E0111 08:11:20.564716 524577 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-176470" does not appear in /home/jenkins/minikube-integration/22402-276769/kubeconfig
** /stderr **
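The status output above also warns that kubectl points at a stale context and names the fix itself; scoped to this profile it would look like the sketch below (binary and profile name taken from this run, though here the profile is deleted immediately afterwards):

  # Repoint the kubeconfig entry for this profile, as the warning suggests.
  out/minikube-linux-arm64 update-context -p force-systemd-flag-176470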
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-flag-176470" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-flag-176470" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-arm64 delete -p force-systemd-flag-176470
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-176470: (2.039331365s)
--- FAIL: TestForceSystemdFlag (507.47s)