=== RUN TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT TestForceSystemdFlag
docker_test.go:91: (dbg) Run: out/minikube-linux-arm64 start -p force-systemd-flag-447307 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=containerd
E0110 09:07:17.255612 4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/functional-822966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-flag-447307 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=containerd: exit status 109 (8m19.946280713s)
-- stdout --
* [force-systemd-flag-447307] minikube v1.37.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=22427
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/22427-2439/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-2439/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "force-systemd-flag-447307" primary control-plane node in "force-systemd-flag-447307" cluster
* Pulling base image v0.0.48-1767944074-22401 ...
* Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
-- /stdout --
** stderr **
I0110 09:05:33.409464 209870 out.go:360] Setting OutFile to fd 1 ...
I0110 09:05:33.409589 209870 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 09:05:33.409600 209870 out.go:374] Setting ErrFile to fd 2...
I0110 09:05:33.409606 209870 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 09:05:33.409935 209870 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-2439/.minikube/bin
I0110 09:05:33.410391 209870 out.go:368] Setting JSON to false
I0110 09:05:33.411232 209870 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":2887,"bootTime":1768033047,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
I0110 09:05:33.411330 209870 start.go:143] virtualization:
I0110 09:05:33.415136 209870 out.go:179] * [force-systemd-flag-447307] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I0110 09:05:33.419458 209870 out.go:179] - MINIKUBE_LOCATION=22427
I0110 09:05:33.419525 209870 notify.go:221] Checking for updates...
I0110 09:05:33.425912 209870 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0110 09:05:33.429209 209870 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22427-2439/kubeconfig
I0110 09:05:33.432347 209870 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-2439/.minikube
I0110 09:05:33.435460 209870 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I0110 09:05:33.438570 209870 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I0110 09:05:33.442372 209870 config.go:182] Loaded profile config "force-systemd-env-562333": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I0110 09:05:33.442573 209870 driver.go:422] Setting default libvirt URI to qemu:///system
I0110 09:05:33.476555 209870 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
I0110 09:05:33.476689 209870 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0110 09:05:33.532767 209870 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 09:05:33.522514518 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I0110 09:05:33.532884 209870 docker.go:319] overlay module found
I0110 09:05:33.536216 209870 out.go:179] * Using the docker driver based on user configuration
I0110 09:05:33.539266 209870 start.go:309] selected driver: docker
I0110 09:05:33.539290 209870 start.go:928] validating driver "docker" against <nil>
I0110 09:05:33.539304 209870 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0110 09:05:33.540257 209870 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0110 09:05:33.606717 209870 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 09:05:33.597485332 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I0110 09:05:33.606880 209870 start_flags.go:333] no existing cluster config was found, will generate one from the flags
I0110 09:05:33.607150 209870 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
I0110 09:05:33.610109 209870 out.go:179] * Using Docker driver with root privileges
I0110 09:05:33.613052 209870 cni.go:84] Creating CNI manager for ""
I0110 09:05:33.613123 209870 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0110 09:05:33.613137 209870 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
I0110 09:05:33.613210 209870 start.go:353] cluster config:
{Name:force-systemd-flag-447307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-447307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I0110 09:05:33.616369 209870 out.go:179] * Starting "force-systemd-flag-447307" primary control-plane node in "force-systemd-flag-447307" cluster
I0110 09:05:33.619253 209870 cache.go:134] Beginning downloading kic base image for docker with containerd
I0110 09:05:33.622283 209870 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
I0110 09:05:33.625240 209870 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I0110 09:05:33.625284 209870 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-2439/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
I0110 09:05:33.625294 209870 cache.go:65] Caching tarball of preloaded images
I0110 09:05:33.625329 209870 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
I0110 09:05:33.625389 209870 preload.go:251] Found /home/jenkins/minikube-integration/22427-2439/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I0110 09:05:33.625401 209870 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
I0110 09:05:33.625502 209870 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/config.json ...
I0110 09:05:33.625518 209870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/config.json: {Name:mkf2d31f6f9a10b94727bf46c1c457843d8705ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0110 09:05:33.646574 209870 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
I0110 09:05:33.646596 209870 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
I0110 09:05:33.646616 209870 cache.go:243] Successfully downloaded all kic artifacts
I0110 09:05:33.646655 209870 start.go:360] acquireMachinesLock for force-systemd-flag-447307: {Name:mkd48671d04edb3bc812df6ed361a4acb7311dfb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0110 09:05:33.646759 209870 start.go:364] duration metric: took 84.121µs to acquireMachinesLock for "force-systemd-flag-447307"
I0110 09:05:33.646788 209870 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-447307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-447307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0110 09:05:33.646856 209870 start.go:125] createHost starting for "" (driver="docker")
I0110 09:05:33.650271 209870 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
I0110 09:05:33.650508 209870 start.go:159] libmachine.API.Create for "force-systemd-flag-447307" (driver="docker")
I0110 09:05:33.650544 209870 client.go:173] LocalClient.Create starting
I0110 09:05:33.650632 209870 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca.pem
I0110 09:05:33.650669 209870 main.go:144] libmachine: Decoding PEM data...
I0110 09:05:33.650699 209870 main.go:144] libmachine: Parsing certificate...
I0110 09:05:33.650748 209870 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-2439/.minikube/certs/cert.pem
I0110 09:05:33.650798 209870 main.go:144] libmachine: Decoding PEM data...
I0110 09:05:33.650814 209870 main.go:144] libmachine: Parsing certificate...
I0110 09:05:33.651204 209870 cli_runner.go:164] Run: docker network inspect force-systemd-flag-447307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0110 09:05:33.667215 209870 cli_runner.go:211] docker network inspect force-systemd-flag-447307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0110 09:05:33.667320 209870 network_create.go:284] running [docker network inspect force-systemd-flag-447307] to gather additional debugging logs...
I0110 09:05:33.667372 209870 cli_runner.go:164] Run: docker network inspect force-systemd-flag-447307
W0110 09:05:33.687489 209870 cli_runner.go:211] docker network inspect force-systemd-flag-447307 returned with exit code 1
I0110 09:05:33.687524 209870 network_create.go:287] error running [docker network inspect force-systemd-flag-447307]: docker network inspect force-systemd-flag-447307: exit status 1
stdout:
[]
stderr:
Error response from daemon: network force-systemd-flag-447307 not found
I0110 09:05:33.687543 209870 network_create.go:289] output of [docker network inspect force-systemd-flag-447307]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network force-systemd-flag-447307 not found
** /stderr **
I0110 09:05:33.687651 209870 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0110 09:05:33.707102 209870 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e01acd8ff726 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:be:8b:1d:1f:6a:28} reservation:<nil>}
I0110 09:05:33.707525 209870 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ab4f89e52867 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3e:d7:2a:6d:f4:96} reservation:<nil>}
I0110 09:05:33.707892 209870 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8b226bd60dd7 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4e:e6:a7:a9:74:b4} reservation:<nil>}
I0110 09:05:33.708300 209870 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-e0acd7192481 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:5e:16:f7:84:76:30} reservation:<nil>}
I0110 09:05:33.708837 209870 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001930810}
I0110 09:05:33.708905 209870 network_create.go:124] attempt to create docker network force-systemd-flag-447307 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
I0110 09:05:33.708992 209870 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-447307 force-systemd-flag-447307
I0110 09:05:33.788655 209870 network_create.go:108] docker network force-systemd-flag-447307 192.168.85.0/24 created
I0110 09:05:33.788695 209870 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-447307" container
I0110 09:05:33.788778 209870 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0110 09:05:33.805636 209870 cli_runner.go:164] Run: docker volume create force-systemd-flag-447307 --label name.minikube.sigs.k8s.io=force-systemd-flag-447307 --label created_by.minikube.sigs.k8s.io=true
I0110 09:05:33.825622 209870 oci.go:103] Successfully created a docker volume force-systemd-flag-447307
I0110 09:05:33.825717 209870 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-447307-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-447307 --entrypoint /usr/bin/test -v force-systemd-flag-447307:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
I0110 09:05:34.392818 209870 oci.go:107] Successfully prepared a docker volume force-systemd-flag-447307
I0110 09:05:34.392892 209870 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I0110 09:05:34.392910 209870 kic.go:194] Starting extracting preloaded images to volume ...
I0110 09:05:34.392989 209870 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-2439/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-447307:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
I0110 09:05:38.304331 209870 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-2439/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-447307:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir: (3.911286648s)
I0110 09:05:38.304367 209870 kic.go:203] duration metric: took 3.911453699s to extract preloaded images to volume ...
W0110 09:05:38.304501 209870 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0110 09:05:38.304616 209870 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0110 09:05:38.370965 209870 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-447307 --name force-systemd-flag-447307 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-447307 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-447307 --network force-systemd-flag-447307 --ip 192.168.85.2 --volume force-systemd-flag-447307:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
I0110 09:05:38.714423 209870 cli_runner.go:164] Run: docker container inspect force-systemd-flag-447307 --format={{.State.Running}}
I0110 09:05:38.748378 209870 cli_runner.go:164] Run: docker container inspect force-systemd-flag-447307 --format={{.State.Status}}
I0110 09:05:38.768410 209870 cli_runner.go:164] Run: docker exec force-systemd-flag-447307 stat /var/lib/dpkg/alternatives/iptables
I0110 09:05:38.820953 209870 oci.go:144] the created container "force-systemd-flag-447307" has a running status.
I0110 09:05:38.820980 209870 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22427-2439/.minikube/machines/force-systemd-flag-447307/id_rsa...
I0110 09:05:39.091967 209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/machines/force-systemd-flag-447307/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0110 09:05:39.092015 209870 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22427-2439/.minikube/machines/force-systemd-flag-447307/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0110 09:05:39.120080 209870 cli_runner.go:164] Run: docker container inspect force-systemd-flag-447307 --format={{.State.Status}}
I0110 09:05:39.153524 209870 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0110 09:05:39.153549 209870 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-447307 chown docker:docker /home/docker/.ssh/authorized_keys]
I0110 09:05:39.219499 209870 cli_runner.go:164] Run: docker container inspect force-systemd-flag-447307 --format={{.State.Status}}
I0110 09:05:39.244109 209870 machine.go:94] provisionDockerMachine start ...
I0110 09:05:39.244193 209870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-447307
I0110 09:05:39.268479 209870 main.go:144] libmachine: Using SSH client type: native
I0110 09:05:39.268982 209870 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil> [] 0s} 127.0.0.1 33044 <nil> <nil>}
I0110 09:05:39.268997 209870 main.go:144] libmachine: About to run SSH command:
hostname
I0110 09:05:39.269646 209870 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I0110 09:05:42.418786 209870 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-447307
I0110 09:05:42.418815 209870 ubuntu.go:182] provisioning hostname "force-systemd-flag-447307"
I0110 09:05:42.418891 209870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-447307
I0110 09:05:42.436389 209870 main.go:144] libmachine: Using SSH client type: native
I0110 09:05:42.436711 209870 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil> [] 0s} 127.0.0.1 33044 <nil> <nil>}
I0110 09:05:42.436734 209870 main.go:144] libmachine: About to run SSH command:
sudo hostname force-systemd-flag-447307 && echo "force-systemd-flag-447307" | sudo tee /etc/hostname
I0110 09:05:42.592363 209870 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-447307
I0110 09:05:42.592444 209870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-447307
I0110 09:05:42.609168 209870 main.go:144] libmachine: Using SSH client type: native
I0110 09:05:42.609485 209870 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil> [] 0s} 127.0.0.1 33044 <nil> <nil>}
I0110 09:05:42.609511 209870 main.go:144] libmachine: About to run SSH command:
if ! grep -xq '.*\sforce-systemd-flag-447307' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-447307/g' /etc/hosts;
  else
    echo '127.0.1.1 force-systemd-flag-447307' | sudo tee -a /etc/hosts;
  fi
fi
I0110 09:05:42.763850 209870 main.go:144] libmachine: SSH cmd err, output: <nil>:
I0110 09:05:42.763885 209870 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22427-2439/.minikube CaCertPath:/home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22427-2439/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22427-2439/.minikube}
I0110 09:05:42.763907 209870 ubuntu.go:190] setting up certificates
I0110 09:05:42.763917 209870 provision.go:84] configureAuth start
I0110 09:05:42.763975 209870 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-447307
I0110 09:05:42.780237 209870 provision.go:143] copyHostCerts
I0110 09:05:42.780278 209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22427-2439/.minikube/ca.pem
I0110 09:05:42.780310 209870 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-2439/.minikube/ca.pem, removing ...
I0110 09:05:42.780322 209870 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-2439/.minikube/ca.pem
I0110 09:05:42.780397 209870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22427-2439/.minikube/ca.pem (1078 bytes)
I0110 09:05:42.780483 209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22427-2439/.minikube/cert.pem
I0110 09:05:42.780504 209870 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-2439/.minikube/cert.pem, removing ...
I0110 09:05:42.780509 209870 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-2439/.minikube/cert.pem
I0110 09:05:42.780582 209870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22427-2439/.minikube/cert.pem (1123 bytes)
I0110 09:05:42.780638 209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22427-2439/.minikube/key.pem
I0110 09:05:42.780660 209870 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-2439/.minikube/key.pem, removing ...
I0110 09:05:42.780668 209870 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-2439/.minikube/key.pem
I0110 09:05:42.780694 209870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22427-2439/.minikube/key.pem (1675 bytes)
I0110 09:05:42.780745 209870 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22427-2439/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-447307 san=[127.0.0.1 192.168.85.2 force-systemd-flag-447307 localhost minikube]
I0110 09:05:43.091195 209870 provision.go:177] copyRemoteCerts
I0110 09:05:43.091276 209870 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0110 09:05:43.091317 209870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-447307
I0110 09:05:43.112972 209870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33044 SSHKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/force-systemd-flag-447307/id_rsa Username:docker}
I0110 09:05:43.219219 209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0110 09:05:43.219278 209870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0110 09:05:43.236969 209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/machines/server.pem -> /etc/docker/server.pem
I0110 09:05:43.237036 209870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
I0110 09:05:43.254736 209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0110 09:05:43.254810 209870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0110 09:05:43.272505 209870 provision.go:87] duration metric: took 508.564973ms to configureAuth
I0110 09:05:43.272534 209870 ubuntu.go:206] setting minikube options for container-runtime
I0110 09:05:43.272716 209870 config.go:182] Loaded profile config "force-systemd-flag-447307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I0110 09:05:43.272731 209870 machine.go:97] duration metric: took 4.028601641s to provisionDockerMachine
I0110 09:05:43.272739 209870 client.go:176] duration metric: took 9.622186198s to LocalClient.Create
I0110 09:05:43.272758 209870 start.go:167] duration metric: took 9.622250757s to libmachine.API.Create "force-systemd-flag-447307"
I0110 09:05:43.272767 209870 start.go:293] postStartSetup for "force-systemd-flag-447307" (driver="docker")
I0110 09:05:43.272776 209870 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0110 09:05:43.272844 209870 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0110 09:05:43.272890 209870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-447307
I0110 09:05:43.291040 209870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33044 SSHKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/force-systemd-flag-447307/id_rsa Username:docker}
I0110 09:05:43.399676 209870 ssh_runner.go:195] Run: cat /etc/os-release
I0110 09:05:43.403118 209870 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0110 09:05:43.403149 209870 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I0110 09:05:43.403161 209870 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-2439/.minikube/addons for local assets ...
I0110 09:05:43.403215 209870 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-2439/.minikube/files for local assets ...
I0110 09:05:43.403296 209870 filesync.go:149] local asset: /home/jenkins/minikube-integration/22427-2439/.minikube/files/etc/ssl/certs/42572.pem -> 42572.pem in /etc/ssl/certs
I0110 09:05:43.403307 209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/files/etc/ssl/certs/42572.pem -> /etc/ssl/certs/42572.pem
I0110 09:05:43.403441 209870 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0110 09:05:43.411262 209870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/files/etc/ssl/certs/42572.pem --> /etc/ssl/certs/42572.pem (1708 bytes)
I0110 09:05:43.428973 209870 start.go:296] duration metric: took 156.191974ms for postStartSetup
I0110 09:05:43.429327 209870 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-447307
I0110 09:05:43.449128 209870 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/config.json ...
I0110 09:05:43.449426 209870 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0110 09:05:43.449470 209870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-447307
I0110 09:05:43.469062 209870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33044 SSHKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/force-systemd-flag-447307/id_rsa Username:docker}
I0110 09:05:43.568491 209870 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0110 09:05:43.573119 209870 start.go:128] duration metric: took 9.926249482s to createHost
I0110 09:05:43.573146 209870 start.go:83] releasing machines lock for "force-systemd-flag-447307", held for 9.926372964s
I0110 09:05:43.573217 209870 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-447307
I0110 09:05:43.590249 209870 ssh_runner.go:195] Run: cat /version.json
I0110 09:05:43.590305 209870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-447307
I0110 09:05:43.590572 209870 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0110 09:05:43.590643 209870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-447307
I0110 09:05:43.615492 209870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33044 SSHKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/force-systemd-flag-447307/id_rsa Username:docker}
I0110 09:05:43.617966 209870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33044 SSHKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/force-systemd-flag-447307/id_rsa Username:docker}
I0110 09:05:43.715397 209870 ssh_runner.go:195] Run: systemctl --version
I0110 09:05:43.819535 209870 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0110 09:05:43.823893 209870 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0110 09:05:43.824019 209870 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0110 09:05:43.851657 209870 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
I0110 09:05:43.851693 209870 start.go:496] detecting cgroup driver to use...
I0110 09:05:43.851707 209870 start.go:500] using "systemd" cgroup driver as enforced via flags
I0110 09:05:43.851778 209870 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0110 09:05:43.867281 209870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0110 09:05:43.880163 209870 docker.go:218] disabling cri-docker service (if available) ...
I0110 09:05:43.880224 209870 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0110 09:05:43.897601 209870 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0110 09:05:43.916022 209870 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0110 09:05:44.034195 209870 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0110 09:05:44.157732 209870 docker.go:234] disabling docker service ...
I0110 09:05:44.157806 209870 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0110 09:05:44.182671 209870 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0110 09:05:44.199192 209870 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0110 09:05:44.328856 209870 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0110 09:05:44.450963 209870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0110 09:05:44.463783 209870 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0110 09:05:44.479468 209870 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I0110 09:05:44.488749 209870 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0110 09:05:44.497642 209870 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
I0110 09:05:44.497707 209870 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
I0110 09:05:44.506787 209870 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0110 09:05:44.516077 209870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0110 09:05:44.524994 209870 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0110 09:05:44.533763 209870 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0110 09:05:44.542113 209870 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0110 09:05:44.551294 209870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0110 09:05:44.560593 209870 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0110 09:05:44.569667 209870 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0110 09:05:44.577424 209870 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0110 09:05:44.585163 209870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0110 09:05:44.695011 209870 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0110 09:05:44.824179 209870 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
I0110 09:05:44.824246 209870 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0110 09:05:44.828299 209870 start.go:574] Will wait 60s for crictl version
I0110 09:05:44.828398 209870 ssh_runner.go:195] Run: which crictl
I0110 09:05:44.831917 209870 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I0110 09:05:44.856160 209870 start.go:590] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v2.2.1
RuntimeApiVersion: v1
I0110 09:05:44.856261 209870 ssh_runner.go:195] Run: containerd --version
I0110 09:05:44.877321 209870 ssh_runner.go:195] Run: containerd --version
I0110 09:05:44.901544 209870 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
I0110 09:05:44.904468 209870 cli_runner.go:164] Run: docker network inspect force-systemd-flag-447307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0110 09:05:44.921071 209870 ssh_runner.go:195] Run: grep 192.168.85.1 host.minikube.internal$ /etc/hosts
I0110 09:05:44.924958 209870 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0110 09:05:44.935964 209870 kubeadm.go:884] updating cluster {Name:force-systemd-flag-447307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-447307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
I0110 09:05:44.936082 209870 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I0110 09:05:44.936148 209870 ssh_runner.go:195] Run: sudo crictl images --output json
I0110 09:05:44.963934 209870 containerd.go:635] all images are preloaded for containerd runtime.
I0110 09:05:44.963961 209870 containerd.go:542] Images already preloaded, skipping extraction
I0110 09:05:44.964020 209870 ssh_runner.go:195] Run: sudo crictl images --output json
I0110 09:05:44.993776 209870 containerd.go:635] all images are preloaded for containerd runtime.
I0110 09:05:44.993799 209870 cache_images.go:86] Images are preloaded, skipping loading
I0110 09:05:44.993808 209870 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 containerd true true} ...
I0110 09:05:44.993914 209870 kubeadm.go:947] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-447307 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
[Install]
config:
{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-447307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0110 09:05:44.993982 209870 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
I0110 09:05:45.035190 209870 cni.go:84] Creating CNI manager for ""
I0110 09:05:45.035214 209870 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0110 09:05:45.035241 209870 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I0110 09:05:45.035266 209870 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-447307 NodeName:force-systemd-flag-447307 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0110 09:05:45.035486 209870 kubeadm.go:203] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.85.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "force-systemd-flag-447307"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.85.2"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.35.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0110 09:05:45.035574 209870 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
I0110 09:05:45.067399 209870 binaries.go:51] Found k8s binaries, skipping transfer
I0110 09:05:45.067510 209870 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0110 09:05:45.092872 209870 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
I0110 09:05:45.121773 209870 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0110 09:05:45.154954 209870 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
I0110 09:05:45.231953 209870 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I0110 09:05:45.237475 209870 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0110 09:05:45.260281 209870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0110 09:05:45.419211 209870 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0110 09:05:45.437873 209870 certs.go:69] Setting up /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307 for IP: 192.168.85.2
I0110 09:05:45.437942 209870 certs.go:195] generating shared ca certs ...
I0110 09:05:45.437991 209870 certs.go:227] acquiring lock for ca certs: {Name:mk2efb7c26990a28337b434f05b8d75a57c7c690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0110 09:05:45.438190 209870 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22427-2439/.minikube/ca.key
I0110 09:05:45.438256 209870 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22427-2439/.minikube/proxy-client-ca.key
I0110 09:05:45.438302 209870 certs.go:257] generating profile certs ...
I0110 09:05:45.438386 209870 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/client.key
I0110 09:05:45.438435 209870 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/client.crt with IP's: []
I0110 09:05:45.568734 209870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/client.crt ...
I0110 09:05:45.568768 209870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/client.crt: {Name:mk93119e0751f692d1add2634b06b07d570f7c6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0110 09:05:45.568970 209870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/client.key ...
I0110 09:05:45.568988 209870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/client.key: {Name:mkd0ec99179f57a4bf574d82b9d5dd3231ca72d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0110 09:05:45.569084 209870 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/apiserver.key.bdccf66d
I0110 09:05:45.569103 209870 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/apiserver.crt.bdccf66d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
I0110 09:05:45.634799 209870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/apiserver.crt.bdccf66d ...
I0110 09:05:45.634831 209870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/apiserver.crt.bdccf66d: {Name:mk1f93a1a18d813cb88fd475e0986fb6bcc9bd35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0110 09:05:45.635018 209870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/apiserver.key.bdccf66d ...
I0110 09:05:45.635033 209870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/apiserver.key.bdccf66d: {Name:mkc43e75e3e468932f9ce36624b08b9cf784c70c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0110 09:05:45.635122 209870 certs.go:382] copying /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/apiserver.crt.bdccf66d -> /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/apiserver.crt
I0110 09:05:45.635249 209870 certs.go:386] copying /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/apiserver.key.bdccf66d -> /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/apiserver.key
I0110 09:05:45.635318 209870 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/proxy-client.key
I0110 09:05:45.635336 209870 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/proxy-client.crt with IP's: []
I0110 09:05:45.872246 209870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/proxy-client.crt ...
I0110 09:05:45.872281 209870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/proxy-client.crt: {Name:mka6e1c552726af90963b0c4641d45cc7689a203 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0110 09:05:45.872469 209870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/proxy-client.key ...
I0110 09:05:45.872484 209870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/proxy-client.key: {Name:mk8e5271296bc709b5c836c748d108f6bf8306ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0110 09:05:45.872565 209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0110 09:05:45.872587 209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0110 09:05:45.872599 209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0110 09:05:45.872615 209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0110 09:05:45.872633 209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0110 09:05:45.872650 209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0110 09:05:45.872666 209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0110 09:05:45.872681 209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0110 09:05:45.872732 209870 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/4257.pem (1338 bytes)
W0110 09:05:45.872776 209870 certs.go:480] ignoring /home/jenkins/minikube-integration/22427-2439/.minikube/certs/4257_empty.pem, impossibly tiny 0 bytes
I0110 09:05:45.872788 209870 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca-key.pem (1679 bytes)
I0110 09:05:45.872823 209870 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca.pem (1078 bytes)
I0110 09:05:45.872851 209870 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/cert.pem (1123 bytes)
I0110 09:05:45.872886 209870 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/key.pem (1675 bytes)
I0110 09:05:45.872937 209870 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2439/.minikube/files/etc/ssl/certs/42572.pem (1708 bytes)
I0110 09:05:45.872975 209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/4257.pem -> /usr/share/ca-certificates/4257.pem
I0110 09:05:45.872997 209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/files/etc/ssl/certs/42572.pem -> /usr/share/ca-certificates/42572.pem
I0110 09:05:45.873020 209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0110 09:05:45.873565 209870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0110 09:05:45.894181 209870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0110 09:05:45.914336 209870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0110 09:05:45.933562 209870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0110 09:05:45.952739 209870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
I0110 09:05:45.971678 209870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0110 09:05:45.990590 209870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0110 09:05:46.009080 209870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0110 09:05:46.029612 209870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/certs/4257.pem --> /usr/share/ca-certificates/4257.pem (1338 bytes)
I0110 09:05:46.049043 209870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/files/etc/ssl/certs/42572.pem --> /usr/share/ca-certificates/42572.pem (1708 bytes)
I0110 09:05:46.066769 209870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0110 09:05:46.086243 209870 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
I0110 09:05:46.099249 209870 ssh_runner.go:195] Run: openssl version
I0110 09:05:46.106159 209870 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I0110 09:05:46.113597 209870 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I0110 09:05:46.121108 209870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0110 09:05:46.124874 209870 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 08:21 /usr/share/ca-certificates/minikubeCA.pem
I0110 09:05:46.124949 209870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0110 09:05:46.165876 209870 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I0110 09:05:46.173607 209870 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
I0110 09:05:46.181172 209870 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4257.pem
I0110 09:05:46.189176 209870 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4257.pem /etc/ssl/certs/4257.pem
I0110 09:05:46.197604 209870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4257.pem
I0110 09:05:46.202316 209870 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 08:27 /usr/share/ca-certificates/4257.pem
I0110 09:05:46.202452 209870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4257.pem
I0110 09:05:46.244731 209870 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I0110 09:05:46.252500 209870 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4257.pem /etc/ssl/certs/51391683.0
I0110 09:05:46.260155 209870 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42572.pem
I0110 09:05:46.267974 209870 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42572.pem /etc/ssl/certs/42572.pem
I0110 09:05:46.276015 209870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42572.pem
I0110 09:05:46.280136 209870 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 08:27 /usr/share/ca-certificates/42572.pem
I0110 09:05:46.280201 209870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42572.pem
I0110 09:05:46.321282 209870 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I0110 09:05:46.328915 209870 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/42572.pem /etc/ssl/certs/3ec20f2e.0
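The openssl/ln sequences above implement OpenSSL's subject-hash lookup scheme: at verification time a CA is resolved as /etc/ssl/certs/<subject-hash>.0, so each PEM installed under /usr/share/ca-certificates gets a hash-named symlink (b5213941.0 for minikubeCA.pem in this run). A hedged sketch of the same pattern, with an illustrative helper name:

    # Link an installed CA under the hash filename OpenSSL resolves at
    # verification time; reruns are harmless thanks to ln -f.
    link_ca() {
      pem="$1"
      hash=$(openssl x509 -hash -noout -in "$pem")
      sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"
    }
    link_ca /usr/share/ca-certificates/minikubeCA.pem   # -> b5213941.0 here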
I0110 09:05:46.336599 209870 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0110 09:05:46.340309 209870 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0110 09:05:46.340362 209870 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-447307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-447307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I0110 09:05:46.340440 209870 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0110 09:05:46.340505 209870 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0110 09:05:46.368732 209870 cri.go:96] found id: ""
I0110 09:05:46.368825 209870 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0110 09:05:46.377083 209870 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0110 09:05:46.385046 209870 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I0110 09:05:46.385169 209870 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0110 09:05:46.393422 209870 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0110 09:05:46.393446 209870 kubeadm.go:158] found existing configuration files:
I0110 09:05:46.393528 209870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0110 09:05:46.402057 209870 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0110 09:05:46.402155 209870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0110 09:05:46.409739 209870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0110 09:05:46.417579 209870 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0110 09:05:46.417663 209870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0110 09:05:46.425416 209870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0110 09:05:46.433477 209870 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0110 09:05:46.433598 209870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0110 09:05:46.442123 209870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0110 09:05:46.453573 209870 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0110 09:05:46.453686 209870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0110 09:05:46.464707 209870 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0110 09:05:46.525523 209870 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
I0110 09:05:46.525947 209870 kubeadm.go:319] [preflight] Running pre-flight checks
I0110 09:05:46.597987 209870 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I0110 09:05:46.598061 209870 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I0110 09:05:46.598113 209870 kubeadm.go:319] OS: Linux
I0110 09:05:46.598166 209870 kubeadm.go:319] CGROUPS_CPU: enabled
I0110 09:05:46.598220 209870 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I0110 09:05:46.598270 209870 kubeadm.go:319] CGROUPS_CPUSET: enabled
I0110 09:05:46.598320 209870 kubeadm.go:319] CGROUPS_DEVICES: enabled
I0110 09:05:46.598379 209870 kubeadm.go:319] CGROUPS_FREEZER: enabled
I0110 09:05:46.598434 209870 kubeadm.go:319] CGROUPS_MEMORY: enabled
I0110 09:05:46.598482 209870 kubeadm.go:319] CGROUPS_PIDS: enabled
I0110 09:05:46.598540 209870 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I0110 09:05:46.598589 209870 kubeadm.go:319] CGROUPS_BLKIO: enabled
I0110 09:05:46.662544 209870 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I0110 09:05:46.662658 209870 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0110 09:05:46.662754 209870 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0110 09:05:46.671756 209870 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0110 09:05:46.678150 209870 out.go:252] - Generating certificates and keys ...
I0110 09:05:46.678326 209870 kubeadm.go:319] [certs] Using existing ca certificate authority
I0110 09:05:46.678444 209870 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I0110 09:05:47.409478 209870 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I0110 09:05:47.578923 209870 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I0110 09:05:47.675285 209870 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I0110 09:05:47.915407 209870 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I0110 09:05:48.056354 209870 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I0110 09:05:48.056768 209870 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-447307 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I0110 09:05:48.397487 209870 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I0110 09:05:48.397857 209870 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-447307 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I0110 09:05:48.490818 209870 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I0110 09:05:48.893329 209870 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I0110 09:05:49.168813 209870 kubeadm.go:319] [certs] Generating "sa" key and public key
I0110 09:05:49.169088 209870 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0110 09:05:49.386189 209870 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I0110 09:05:49.640500 209870 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0110 09:05:50.248302 209870 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0110 09:05:50.303575 209870 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0110 09:05:50.498195 209870 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0110 09:05:50.498841 209870 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0110 09:05:50.501376 209870 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0110 09:05:50.505131 209870 out.go:252] - Booting up control plane ...
I0110 09:05:50.505260 209870 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0110 09:05:50.505353 209870 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0110 09:05:50.505445 209870 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0110 09:05:50.521530 209870 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0110 09:05:50.521669 209870 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I0110 09:05:50.530142 209870 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I0110 09:05:50.530443 209870 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0110 09:05:50.530495 209870 kubeadm.go:319] [kubelet-start] Starting the kubelet
I0110 09:05:50.669341 209870 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0110 09:05:50.669965 209870 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0110 09:09:50.670468 209870 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000404613s
I0110 09:09:50.675514 209870 kubeadm.go:319]
I0110 09:09:50.675647 209870 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I0110 09:09:50.675714 209870 kubeadm.go:319] - The kubelet is not running
I0110 09:09:50.675911 209870 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0110 09:09:50.675926 209870 kubeadm.go:319]
I0110 09:09:50.676109 209870 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0110 09:09:50.676170 209870 kubeadm.go:319] - 'systemctl status kubelet'
I0110 09:09:50.676227 209870 kubeadm.go:319] - 'journalctl -xeu kubelet'
I0110 09:09:50.676235 209870 kubeadm.go:319]
I0110 09:09:50.676744 209870 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I0110 09:09:50.677480 209870 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I0110 09:09:50.677676 209870 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0110 09:09:50.678147 209870 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0110 09:09:50.678158 209870 kubeadm.go:319]
I0110 09:09:50.678277 209870 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
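Before the retry below, the failure can be probed by hand with the exact checks kubeadm names above; with the docker driver the commands run inside the node container via minikube ssh (binary path and profile name taken from this run):

    # Hand-run the troubleshooting steps from the kubeadm error message.
    out/minikube-linux-arm64 ssh -p force-systemd-flag-447307 -- sudo systemctl status kubelet
    out/minikube-linux-arm64 ssh -p force-systemd-flag-447307 -- sudo journalctl -xeu kubelet
    # The healthz probe kubeadm polled for 4m0s before giving up:
    out/minikube-linux-arm64 ssh -p force-systemd-flag-447307 -- curl -sSL http://127.0.0.1:10248/healthz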
W0110 09:09:50.678412 209870 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-447307 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-447307 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000404613s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
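The cgroups v1 warning repeated above names the knob that controls this failure mode. A hypothetical sketch of what it asks for, as a KubeletConfiguration fragment (the YAML field name is inferred from the option 'FailCgroupV1' quoted in the warning; verify it against the kubelet v1.35 configuration reference before relying on it):

    # Assumption: the Go option FailCgroupV1 surfaces as failCgroupV1 in the
    # v1beta1 kubelet config; this only opts back into deprecated cgroup v1.
    printf '%s\n' \
      'apiVersion: kubelet.config.k8s.io/v1beta1' \
      'kind: KubeletConfiguration' \
      'failCgroupV1: false' > /tmp/kubelet-cgroupv1.yaml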
I0110 09:09:50.678496 209870 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I0110 09:09:51.095151 209870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0110 09:09:51.109734 209870 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I0110 09:09:51.109810 209870 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0110 09:09:51.119457 209870 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0110 09:09:51.119476 209870 kubeadm.go:158] found existing configuration files:
I0110 09:09:51.119530 209870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0110 09:09:51.128307 209870 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0110 09:09:51.128401 209870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0110 09:09:51.136766 209870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0110 09:09:51.145590 209870 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0110 09:09:51.145673 209870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0110 09:09:51.154563 209870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0110 09:09:51.163162 209870 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0110 09:09:51.163284 209870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0110 09:09:51.171783 209870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0110 09:09:51.180978 209870 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0110 09:09:51.181056 209870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0110 09:09:51.189977 209870 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0110 09:09:51.236476 209870 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
I0110 09:09:51.236816 209870 kubeadm.go:319] [preflight] Running pre-flight checks
I0110 09:09:51.314053 209870 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I0110 09:09:51.314136 209870 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I0110 09:09:51.314180 209870 kubeadm.go:319] OS: Linux
I0110 09:09:51.314241 209870 kubeadm.go:319] CGROUPS_CPU: enabled
I0110 09:09:51.314296 209870 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I0110 09:09:51.314361 209870 kubeadm.go:319] CGROUPS_CPUSET: enabled
I0110 09:09:51.314415 209870 kubeadm.go:319] CGROUPS_DEVICES: enabled
I0110 09:09:51.314467 209870 kubeadm.go:319] CGROUPS_FREEZER: enabled
I0110 09:09:51.314534 209870 kubeadm.go:319] CGROUPS_MEMORY: enabled
I0110 09:09:51.314605 209870 kubeadm.go:319] CGROUPS_PIDS: enabled
I0110 09:09:51.314672 209870 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I0110 09:09:51.314725 209870 kubeadm.go:319] CGROUPS_BLKIO: enabled
I0110 09:09:51.388648 209870 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I0110 09:09:51.388866 209870 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0110 09:09:51.389020 209870 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0110 09:09:51.395580 209870 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0110 09:09:51.399132 209870 out.go:252] - Generating certificates and keys ...
I0110 09:09:51.399241 209870 kubeadm.go:319] [certs] Using existing ca certificate authority
I0110 09:09:51.399316 209870 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I0110 09:09:51.399477 209870 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0110 09:09:51.399541 209870 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
I0110 09:09:51.399611 209870 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
I0110 09:09:51.399669 209870 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
I0110 09:09:51.399736 209870 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
I0110 09:09:51.400069 209870 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
I0110 09:09:51.400430 209870 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0110 09:09:51.400705 209870 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0110 09:09:51.400928 209870 kubeadm.go:319] [certs] Using the existing "sa" key
I0110 09:09:51.401000 209870 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0110 09:09:51.651158 209870 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I0110 09:09:51.976821 209870 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0110 09:09:52.238092 209870 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0110 09:09:52.382407 209870 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0110 09:09:52.599476 209870 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0110 09:09:52.599589 209870 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0110 09:09:52.599662 209870 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0110 09:09:52.602908 209870 out.go:252] - Booting up control plane ...
I0110 09:09:52.603023 209870 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0110 09:09:52.603145 209870 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0110 09:09:52.605307 209870 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0110 09:09:52.628653 209870 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0110 09:09:52.628838 209870 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I0110 09:09:52.642380 209870 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I0110 09:09:52.642489 209870 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0110 09:09:52.642535 209870 kubeadm.go:319] [kubelet-start] Starting the kubelet
I0110 09:09:52.838446 209870 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0110 09:09:52.838573 209870 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0110 09:13:52.839263 209870 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001081313s
I0110 09:13:52.839294 209870 kubeadm.go:319]
I0110 09:13:52.839378 209870 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I0110 09:13:52.839416 209870 kubeadm.go:319] - The kubelet is not running
I0110 09:13:52.839522 209870 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0110 09:13:52.839531 209870 kubeadm.go:319]
I0110 09:13:52.839635 209870 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0110 09:13:52.839666 209870 kubeadm.go:319] - 'systemctl status kubelet'
I0110 09:13:52.839697 209870 kubeadm.go:319] - 'journalctl -xeu kubelet'
I0110 09:13:52.839701 209870 kubeadm.go:319]
I0110 09:13:52.844761 209870 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I0110 09:13:52.845168 209870 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I0110 09:13:52.845278 209870 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0110 09:13:52.845530 209870 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0110 09:13:52.845541 209870 kubeadm.go:319]
I0110 09:13:52.845606 209870 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
I0110 09:13:52.845665 209870 kubeadm.go:403] duration metric: took 8m6.505307114s to StartCluster
I0110 09:13:52.845717 209870 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0110 09:13:52.845786 209870 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
I0110 09:13:52.876377 209870 cri.go:96] found id: ""
I0110 09:13:52.876416 209870 logs.go:282] 0 containers: []
W0110 09:13:52.876425 209870 logs.go:284] No container was found matching "kube-apiserver"
I0110 09:13:52.876432 209870 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0110 09:13:52.876504 209870 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
I0110 09:13:52.902015 209870 cri.go:96] found id: ""
I0110 09:13:52.902038 209870 logs.go:282] 0 containers: []
W0110 09:13:52.902047 209870 logs.go:284] No container was found matching "etcd"
I0110 09:13:52.902055 209870 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0110 09:13:52.902130 209870 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
I0110 09:13:52.930106 209870 cri.go:96] found id: ""
I0110 09:13:52.930126 209870 logs.go:282] 0 containers: []
W0110 09:13:52.930135 209870 logs.go:284] No container was found matching "coredns"
I0110 09:13:52.930141 209870 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0110 09:13:52.930200 209870 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
I0110 09:13:52.962753 209870 cri.go:96] found id: ""
I0110 09:13:52.962779 209870 logs.go:282] 0 containers: []
W0110 09:13:52.962788 209870 logs.go:284] No container was found matching "kube-scheduler"
I0110 09:13:52.962794 209870 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0110 09:13:52.962852 209870 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
I0110 09:13:52.988595 209870 cri.go:96] found id: ""
I0110 09:13:52.988621 209870 logs.go:282] 0 containers: []
W0110 09:13:52.988630 209870 logs.go:284] No container was found matching "kube-proxy"
I0110 09:13:52.988637 209870 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0110 09:13:52.988699 209870 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
I0110 09:13:53.013852 209870 cri.go:96] found id: ""
I0110 09:13:53.013877 209870 logs.go:282] 0 containers: []
W0110 09:13:53.013886 209870 logs.go:284] No container was found matching "kube-controller-manager"
I0110 09:13:53.013893 209870 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0110 09:13:53.013952 209870 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
I0110 09:13:53.043720 209870 cri.go:96] found id: ""
I0110 09:13:53.043743 209870 logs.go:282] 0 containers: []
W0110 09:13:53.043752 209870 logs.go:284] No container was found matching "kindnet"
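The seven probes above all follow one pattern: ask the CRI runtime for any container, running or exited, whose name matches a control-plane component; every probe returning an empty ID list confirms nothing was ever scheduled. Condensed as a sketch of the same queries:

    # Same query minikube issues per component; empty output = no container.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl --timeout=10s ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "no container matching $name"
    done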
I0110 09:13:53.043763 209870 logs.go:123] Gathering logs for kubelet ...
I0110 09:13:53.043775 209870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0110 09:13:53.104744 209870 logs.go:123] Gathering logs for dmesg ...
I0110 09:13:53.104780 209870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0110 09:13:53.118863 209870 logs.go:123] Gathering logs for describe nodes ...
I0110 09:13:53.118893 209870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0110 09:13:53.212815 209870 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E0110 09:13:53.185939 4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0110 09:13:53.187077 4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0110 09:13:53.203692 4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0110 09:13:53.206842 4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0110 09:13:53.207578 4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0110 09:13:53.212852 209870 logs.go:123] Gathering logs for containerd ...
I0110 09:13:53.212865 209870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0110 09:13:53.257937 209870 logs.go:123] Gathering logs for container status ...
I0110 09:13:53.257973 209870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
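The container-status line above embeds a fallback chain so log gathering still works when crictl is absent: resolve crictl from PATH if present, otherwise let the bare name fail and fall through to docker. The same chain with modern quoting, as a sketch:

    # Prefer crictl; on any failure (missing binary included) fall back to docker.
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a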
W0110 09:13:53.288259 209870 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001081313s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
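The kubelet health-check failure above already names its own next steps. Because the node here is the kic Docker container rather than a VM, those commands have to run inside it; a minimal sketch, reusing the profile name from this run:

  # Inspect the kubelet unit inside the node container (profile name taken from this log).
  docker exec force-systemd-flag-447307 systemctl status kubelet
  docker exec force-systemd-flag-447307 journalctl -xeu kubelet --no-pager | tail -n 50

  # The same check via minikube's own ssh wrapper:
  out/minikube-linux-arm64 -p force-systemd-flag-447307 ssh "sudo journalctl -xeu kubelet --no-pager | tail -n 50"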
W0110 09:13:53.288311 209870 out.go:285] *
W0110 09:13:53.288383 209870 out.go:285] *
W0110 09:13:53.288643 209870 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
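The box above asks for a full log bundle when filing an issue; with the binary and profile from this run, that is:

  out/minikube-linux-arm64 -p force-systemd-flag-447307 logs --file=logs.txt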
I0110 09:13:53.293744 209870 out.go:203]
W0110 09:13:53.295826 209870 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001081313s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
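The cgroups v1 warning repeated above points at one specific kubelet option. As a sketch only (this run did not set it), keeping kubelet v1.35+ on a cgroup v1 host would mean adding the field to the KubeletConfiguration that kubeadm passes through, in addition to explicitly skipping the validation the warning mentions:

  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  failCgroupV1: false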
W0110 09:13:53.295871 209870 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0110 09:13:53.295893 209870 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I0110 09:13:53.298998 209870 out.go:203]
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-flag-447307 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=containerd" : exit status 109
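Exit status 109 corresponds to the K8S_KUBELET_NOT_RUNNING reason above, and minikube's own suggestion translates into a retry along these lines (a sketch; nothing in this log shows whether it succeeds on this cgroupfs host):

  out/minikube-linux-arm64 start -p force-systemd-flag-447307 --memory=3072 --force-systemd \
    --alsologtostderr -v=5 --driver=docker --container-runtime=containerd \
    --extra-config=kubelet.cgroup-driver=systemd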
docker_test.go:121: (dbg) Run: out/minikube-linux-arm64 -p force-systemd-flag-447307 ssh "cat /etc/containerd/config.toml"
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2026-01-10 09:13:53.695273409 +0000 UTC m=+3214.877660526
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======> post-mortem[TestForceSystemdFlag]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======> post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:240: (dbg) Run: docker inspect force-systemd-flag-447307
helpers_test.go:244: (dbg) docker inspect force-systemd-flag-447307:
-- stdout --
[
{
"Id": "cb73d53b6fd188eb5f2bb3e29a1a85fff1c547f5b1aa977304503792b4c23820",
"Created": "2026-01-10T09:05:38.387757994Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 210320,
"ExitCode": 0,
"Error": "",
"StartedAt": "2026-01-10T09:05:38.479909691Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
"ResolvConfPath": "/var/lib/docker/containers/cb73d53b6fd188eb5f2bb3e29a1a85fff1c547f5b1aa977304503792b4c23820/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/cb73d53b6fd188eb5f2bb3e29a1a85fff1c547f5b1aa977304503792b4c23820/hostname",
"HostsPath": "/var/lib/docker/containers/cb73d53b6fd188eb5f2bb3e29a1a85fff1c547f5b1aa977304503792b4c23820/hosts",
"LogPath": "/var/lib/docker/containers/cb73d53b6fd188eb5f2bb3e29a1a85fff1c547f5b1aa977304503792b4c23820/cb73d53b6fd188eb5f2bb3e29a1a85fff1c547f5b1aa977304503792b4c23820-json.log",
"Name": "/force-systemd-flag-447307",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"force-systemd-flag-447307:/var",
"/lib/modules:/lib/modules:ro"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "force-systemd-flag-447307",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 3221225472,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 6442450944,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "cb73d53b6fd188eb5f2bb3e29a1a85fff1c547f5b1aa977304503792b4c23820",
"LowerDir": "/var/lib/docker/overlay2/a59d50cb78c4dbf0338446645257dc4f52e592a8debda4386d1c61f29ae69956-init/diff:/var/lib/docker/overlay2/54d275d5bf894b41181c968ee2ec1be6f053e8252dc2214525d0175b72739adc/diff",
"MergedDir": "/var/lib/docker/overlay2/a59d50cb78c4dbf0338446645257dc4f52e592a8debda4386d1c61f29ae69956/merged",
"UpperDir": "/var/lib/docker/overlay2/a59d50cb78c4dbf0338446645257dc4f52e592a8debda4386d1c61f29ae69956/diff",
"WorkDir": "/var/lib/docker/overlay2/a59d50cb78c4dbf0338446645257dc4f52e592a8debda4386d1c61f29ae69956/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "volume",
"Name": "force-systemd-flag-447307",
"Source": "/var/lib/docker/volumes/force-systemd-flag-447307/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
},
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
}
],
"Config": {
"Hostname": "force-systemd-flag-447307",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "force-systemd-flag-447307",
"name.minikube.sigs.k8s.io": "force-systemd-flag-447307",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "7a130cb8bfa8d956c8f568fd80fddbaf234fbab1f01b136430c341c0116b6254",
"SandboxKey": "/var/run/docker/netns/7a130cb8bfa8",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33044"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33045"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33048"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33046"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33047"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"force-systemd-flag-447307": {
"IPAMConfig": {
"IPv4Address": "192.168.85.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "ce:cc:a6:1f:06:d7",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "60d0a65897eac77dac2971ccc13a883191ca99bb752fffc4484eae90b26abc3a",
"EndpointID": "8f1a3fbd028148a59dfafd740a903027257cf1f28d7f93ee13840909ca85b75c",
"Gateway": "192.168.85.1",
"IPAddress": "192.168.85.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"force-systemd-flag-447307",
"cb73d53b6fd1"
]
}
}
}
}
]
-- /stdout --
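The post-mortem helper dumps the whole inspect document; a single field can be pulled with a Go template instead, e.g. the published host ports shown above:

  docker inspect -f '{{json .NetworkSettings.Ports}}' force-systemd-flag-447307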
helpers_test.go:248: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-447307 -n force-systemd-flag-447307
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-447307 -n force-systemd-flag-447307: exit status 6 (364.927312ms)
-- stdout --
Running
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E0110 09:13:54.065918 238626 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-447307" does not appear in /home/jenkins/minikube-integration/22427-2439/kubeconfig
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
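The stale-kubectl warning and the missing kubeconfig entry have the one-line fix the output itself names (a sketch with this run's binary and profile; since the start never completed, there may be no endpoint to restore):

  out/minikube-linux-arm64 -p force-systemd-flag-447307 update-context
  kubectl config current-context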
helpers_test.go:253: <<< TestForceSystemdFlag FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======> post-mortem[TestForceSystemdFlag]: minikube logs <======
helpers_test.go:256: (dbg) Run: out/minikube-linux-arm64 -p force-systemd-flag-447307 logs -n 25
helpers_test.go:261: TestForceSystemdFlag logs:
-- stdout --
==> Audit <==
┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ ssh │ cert-options-050298 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt │ cert-options-050298 │ jenkins │ v1.37.0 │ 10 Jan 26 09:08 UTC │ 10 Jan 26 09:08 UTC │
│ ssh │ -p cert-options-050298 -- sudo cat /etc/kubernetes/admin.conf │ cert-options-050298 │ jenkins │ v1.37.0 │ 10 Jan 26 09:08 UTC │ 10 Jan 26 09:08 UTC │
│ delete │ -p cert-options-050298 │ cert-options-050298 │ jenkins │ v1.37.0 │ 10 Jan 26 09:08 UTC │ 10 Jan 26 09:08 UTC │
│ start │ -p old-k8s-version-072756 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-072756 │ jenkins │ v1.37.0 │ 10 Jan 26 09:08 UTC │ 10 Jan 26 09:09 UTC │
│ addons │ enable metrics-server -p old-k8s-version-072756 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ old-k8s-version-072756 │ jenkins │ v1.37.0 │ 10 Jan 26 09:09 UTC │ 10 Jan 26 09:09 UTC │
│ stop │ -p old-k8s-version-072756 --alsologtostderr -v=3 │ old-k8s-version-072756 │ jenkins │ v1.37.0 │ 10 Jan 26 09:09 UTC │ 10 Jan 26 09:09 UTC │
│ addons │ enable dashboard -p old-k8s-version-072756 --images=MetricsScraper=registry.k8s.io/echoserver:1.4 │ old-k8s-version-072756 │ jenkins │ v1.37.0 │ 10 Jan 26 09:09 UTC │ 10 Jan 26 09:09 UTC │
│ start │ -p old-k8s-version-072756 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-072756 │ jenkins │ v1.37.0 │ 10 Jan 26 09:09 UTC │ 10 Jan 26 09:10 UTC │
│ image │ old-k8s-version-072756 image list --format=json │ old-k8s-version-072756 │ jenkins │ v1.37.0 │ 10 Jan 26 09:10 UTC │ 10 Jan 26 09:10 UTC │
│ pause │ -p old-k8s-version-072756 --alsologtostderr -v=1 │ old-k8s-version-072756 │ jenkins │ v1.37.0 │ 10 Jan 26 09:10 UTC │ 10 Jan 26 09:10 UTC │
│ unpause │ -p old-k8s-version-072756 --alsologtostderr -v=1 │ old-k8s-version-072756 │ jenkins │ v1.37.0 │ 10 Jan 26 09:10 UTC │ 10 Jan 26 09:10 UTC │
│ delete │ -p old-k8s-version-072756 │ old-k8s-version-072756 │ jenkins │ v1.37.0 │ 10 Jan 26 09:10 UTC │ 10 Jan 26 09:10 UTC │
│ delete │ -p old-k8s-version-072756 │ old-k8s-version-072756 │ jenkins │ v1.37.0 │ 10 Jan 26 09:10 UTC │ 10 Jan 26 09:10 UTC │
│ start │ -p no-preload-765043 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0 │ no-preload-765043 │ jenkins │ v1.37.0 │ 10 Jan 26 09:10 UTC │ 10 Jan 26 09:11 UTC │
│ addons │ enable metrics-server -p no-preload-765043 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ no-preload-765043 │ jenkins │ v1.37.0 │ 10 Jan 26 09:11 UTC │ 10 Jan 26 09:11 UTC │
│ stop │ -p no-preload-765043 --alsologtostderr -v=3 │ no-preload-765043 │ jenkins │ v1.37.0 │ 10 Jan 26 09:11 UTC │ 10 Jan 26 09:11 UTC │
│ addons │ enable dashboard -p no-preload-765043 --images=MetricsScraper=registry.k8s.io/echoserver:1.4 │ no-preload-765043 │ jenkins │ v1.37.0 │ 10 Jan 26 09:11 UTC │ 10 Jan 26 09:11 UTC │
│ start │ -p no-preload-765043 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0 │ no-preload-765043 │ jenkins │ v1.37.0 │ 10 Jan 26 09:11 UTC │ 10 Jan 26 09:12 UTC │
│ image │ no-preload-765043 image list --format=json │ no-preload-765043 │ jenkins │ v1.37.0 │ 10 Jan 26 09:12 UTC │ 10 Jan 26 09:12 UTC │
│ pause │ -p no-preload-765043 --alsologtostderr -v=1 │ no-preload-765043 │ jenkins │ v1.37.0 │ 10 Jan 26 09:12 UTC │ 10 Jan 26 09:12 UTC │
│ unpause │ -p no-preload-765043 --alsologtostderr -v=1 │ no-preload-765043 │ jenkins │ v1.37.0 │ 10 Jan 26 09:12 UTC │ 10 Jan 26 09:12 UTC │
│ delete │ -p no-preload-765043 │ no-preload-765043 │ jenkins │ v1.37.0 │ 10 Jan 26 09:12 UTC │ 10 Jan 26 09:12 UTC │
│ delete │ -p no-preload-765043 │ no-preload-765043 │ jenkins │ v1.37.0 │ 10 Jan 26 09:12 UTC │ 10 Jan 26 09:12 UTC │
│ start │ -p embed-certs-070240 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0 │ embed-certs-070240 │ jenkins │ v1.37.0 │ 10 Jan 26 09:13 UTC │ 10 Jan 26 09:13 UTC │
│ ssh │ force-systemd-flag-447307 ssh cat /etc/containerd/config.toml │ force-systemd-flag-447307 │ jenkins │ v1.37.0 │ 10 Jan 26 09:13 UTC │ 10 Jan 26 09:13 UTC │
└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2026/01/10 09:13:00
Running on machine: ip-172-31-24-2
Binary: Built with gc go1.25.5 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0110 09:13:00.022185 235056 out.go:360] Setting OutFile to fd 1 ...
I0110 09:13:00.022384 235056 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 09:13:00.022411 235056 out.go:374] Setting ErrFile to fd 2...
I0110 09:13:00.022431 235056 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 09:13:00.022741 235056 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-2439/.minikube/bin
I0110 09:13:00.023225 235056 out.go:368] Setting JSON to false
I0110 09:13:00.024087 235056 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3333,"bootTime":1768033047,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
I0110 09:13:00.024189 235056 start.go:143] virtualization:
I0110 09:13:00.039116 235056 out.go:179] * [embed-certs-070240] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I0110 09:13:00.044442 235056 notify.go:221] Checking for updates...
I0110 09:13:00.052758 235056 out.go:179] - MINIKUBE_LOCATION=22427
I0110 09:13:00.056787 235056 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0110 09:13:00.061873 235056 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22427-2439/kubeconfig
I0110 09:13:00.065162 235056 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-2439/.minikube
I0110 09:13:00.072465 235056 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I0110 09:13:00.076807 235056 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I0110 09:13:00.095191 235056 config.go:182] Loaded profile config "force-systemd-flag-447307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I0110 09:13:00.095321 235056 driver.go:422] Setting default libvirt URI to qemu:///system
I0110 09:13:00.156762 235056 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
I0110 09:13:00.156918 235056 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0110 09:13:00.288095 235056 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 09:13:00.269580082 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I0110 09:13:00.288276 235056 docker.go:319] overlay module found
I0110 09:13:00.297812 235056 out.go:179] * Using the docker driver based on user configuration
I0110 09:13:00.301002 235056 start.go:309] selected driver: docker
I0110 09:13:00.301028 235056 start.go:928] validating driver "docker" against <nil>
I0110 09:13:00.301044 235056 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0110 09:13:00.301981 235056 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0110 09:13:00.451243 235056 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 09:13:00.440097007 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I0110 09:13:00.451646 235056 start_flags.go:333] no existing cluster config was found, will generate one from the flags
I0110 09:13:00.451955 235056 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0110 09:13:00.455194 235056 out.go:179] * Using Docker driver with root privileges
I0110 09:13:00.458322 235056 cni.go:84] Creating CNI manager for ""
I0110 09:13:00.458435 235056 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0110 09:13:00.458455 235056 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
I0110 09:13:00.458553 235056 start.go:353] cluster config:
{Name:embed-certs-070240 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-070240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I0110 09:13:00.463831 235056 out.go:179] * Starting "embed-certs-070240" primary control-plane node in "embed-certs-070240" cluster
I0110 09:13:00.466841 235056 cache.go:134] Beginning downloading kic base image for docker with containerd
I0110 09:13:00.470201 235056 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
I0110 09:13:00.473256 235056 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I0110 09:13:00.473303 235056 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
I0110 09:13:00.473317 235056 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-2439/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
I0110 09:13:00.473350 235056 cache.go:65] Caching tarball of preloaded images
I0110 09:13:00.473465 235056 preload.go:251] Found /home/jenkins/minikube-integration/22427-2439/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I0110 09:13:00.473477 235056 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
I0110 09:13:00.473648 235056 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/config.json ...
I0110 09:13:00.473682 235056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/config.json: {Name:mkbe327345a9c10462c0cfeae6ecc074773073dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0110 09:13:00.501418 235056 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
I0110 09:13:00.501448 235056 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
I0110 09:13:00.501468 235056 cache.go:243] Successfully downloaded all kic artifacts
I0110 09:13:00.501511 235056 start.go:360] acquireMachinesLock for embed-certs-070240: {Name:mkf4458ca775ec5ea65331dd67fbe532fef85672 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0110 09:13:00.501630 235056 start.go:364] duration metric: took 96.823µs to acquireMachinesLock for "embed-certs-070240"
I0110 09:13:00.501667 235056 start.go:93] Provisioning new machine with config: &{Name:embed-certs-070240 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-070240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0110 09:13:00.501749 235056 start.go:125] createHost starting for "" (driver="docker")
I0110 09:13:00.505590 235056 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
I0110 09:13:00.505890 235056 start.go:159] libmachine.API.Create for "embed-certs-070240" (driver="docker")
I0110 09:13:00.505937 235056 client.go:173] LocalClient.Create starting
I0110 09:13:00.506108 235056 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca.pem
I0110 09:13:00.506180 235056 main.go:144] libmachine: Decoding PEM data...
I0110 09:13:00.506209 235056 main.go:144] libmachine: Parsing certificate...
I0110 09:13:00.506313 235056 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-2439/.minikube/certs/cert.pem
I0110 09:13:00.506332 235056 main.go:144] libmachine: Decoding PEM data...
I0110 09:13:00.506343 235056 main.go:144] libmachine: Parsing certificate...
I0110 09:13:00.506731 235056 cli_runner.go:164] Run: docker network inspect embed-certs-070240 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0110 09:13:00.525610 235056 cli_runner.go:211] docker network inspect embed-certs-070240 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0110 09:13:00.525700 235056 network_create.go:284] running [docker network inspect embed-certs-070240] to gather additional debugging logs...
I0110 09:13:00.525739 235056 cli_runner.go:164] Run: docker network inspect embed-certs-070240
W0110 09:13:00.545392 235056 cli_runner.go:211] docker network inspect embed-certs-070240 returned with exit code 1
I0110 09:13:00.545426 235056 network_create.go:287] error running [docker network inspect embed-certs-070240]: docker network inspect embed-certs-070240: exit status 1
stdout:
[]
stderr:
Error response from daemon: network embed-certs-070240 not found
I0110 09:13:00.545440 235056 network_create.go:289] output of [docker network inspect embed-certs-070240]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network embed-certs-070240 not found
** /stderr **
I0110 09:13:00.545546 235056 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0110 09:13:00.565012 235056 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e01acd8ff726 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:be:8b:1d:1f:6a:28} reservation:<nil>}
I0110 09:13:00.565498 235056 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ab4f89e52867 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3e:d7:2a:6d:f4:96} reservation:<nil>}
I0110 09:13:00.565964 235056 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8b226bd60dd7 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4e:e6:a7:a9:74:b4} reservation:<nil>}
I0110 09:13:00.566487 235056 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001963f80}
I0110 09:13:00.566510 235056 network_create.go:124] attempt to create docker network embed-certs-070240 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
I0110 09:13:00.566582 235056 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-070240 embed-certs-070240
I0110 09:13:00.629172 235056 network_create.go:108] docker network embed-certs-070240 192.168.76.0/24 created
I0110 09:13:00.629204 235056 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-070240" container
I0110 09:13:00.629285 235056 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0110 09:13:00.646345 235056 cli_runner.go:164] Run: docker volume create embed-certs-070240 --label name.minikube.sigs.k8s.io=embed-certs-070240 --label created_by.minikube.sigs.k8s.io=true
I0110 09:13:00.664895 235056 oci.go:103] Successfully created a docker volume embed-certs-070240
I0110 09:13:00.664978 235056 cli_runner.go:164] Run: docker run --rm --name embed-certs-070240-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-070240 --entrypoint /usr/bin/test -v embed-certs-070240:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
I0110 09:13:01.225727 235056 oci.go:107] Successfully prepared a docker volume embed-certs-070240
I0110 09:13:01.225805 235056 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I0110 09:13:01.225821 235056 kic.go:194] Starting extracting preloaded images to volume ...
I0110 09:13:01.225900 235056 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-2439/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-070240:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
I0110 09:13:05.167153 235056 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-2439/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-070240:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir: (3.941211081s)
I0110 09:13:05.167188 235056 kic.go:203] duration metric: took 3.941363528s to extract preloaded images to volume ...
W0110 09:13:05.167371 235056 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0110 09:13:05.167489 235056 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0110 09:13:05.267595 235056 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-070240 --name embed-certs-070240 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-070240 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-070240 --network embed-certs-070240 --ip 192.168.76.2 --volume embed-certs-070240:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
I0110 09:13:05.584752 235056 cli_runner.go:164] Run: docker container inspect embed-certs-070240 --format={{.State.Running}}
I0110 09:13:05.603862 235056 cli_runner.go:164] Run: docker container inspect embed-certs-070240 --format={{.State.Status}}
I0110 09:13:05.624497 235056 cli_runner.go:164] Run: docker exec embed-certs-070240 stat /var/lib/dpkg/alternatives/iptables
I0110 09:13:05.678402 235056 oci.go:144] the created container "embed-certs-070240" has a running status.
I0110 09:13:05.678429 235056 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22427-2439/.minikube/machines/embed-certs-070240/id_rsa...
I0110 09:13:05.778910 235056 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22427-2439/.minikube/machines/embed-certs-070240/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0110 09:13:05.796738 235056 cli_runner.go:164] Run: docker container inspect embed-certs-070240 --format={{.State.Status}}
I0110 09:13:05.818623 235056 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0110 09:13:05.818647 235056 kic_runner.go:114] Args: [docker exec --privileged embed-certs-070240 chown docker:docker /home/docker/.ssh/authorized_keys]
I0110 09:13:05.873216 235056 cli_runner.go:164] Run: docker container inspect embed-certs-070240 --format={{.State.Status}}
I0110 09:13:05.894406 235056 machine.go:94] provisionDockerMachine start ...
I0110 09:13:05.894488 235056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-070240
I0110 09:13:05.922411 235056 main.go:144] libmachine: Using SSH client type: native
I0110 09:13:05.922748 235056 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil> [] 0s} 127.0.0.1 33074 <nil> <nil>}
I0110 09:13:05.922756 235056 main.go:144] libmachine: About to run SSH command:
hostname
I0110 09:13:05.925488 235056 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I0110 09:13:09.091223 235056 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-070240
I0110 09:13:09.091289 235056 ubuntu.go:182] provisioning hostname "embed-certs-070240"
I0110 09:13:09.091399 235056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-070240
I0110 09:13:09.109904 235056 main.go:144] libmachine: Using SSH client type: native
I0110 09:13:09.110242 235056 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil> [] 0s} 127.0.0.1 33074 <nil> <nil>}
I0110 09:13:09.110261 235056 main.go:144] libmachine: About to run SSH command:
sudo hostname embed-certs-070240 && echo "embed-certs-070240" | sudo tee /etc/hostname
I0110 09:13:09.273863 235056 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-070240
I0110 09:13:09.273995 235056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-070240
I0110 09:13:09.291523 235056 main.go:144] libmachine: Using SSH client type: native
I0110 09:13:09.291875 235056 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil> [] 0s} 127.0.0.1 33074 <nil> <nil>}
I0110 09:13:09.291902 235056 main.go:144] libmachine: About to run SSH command:
if ! grep -xq '.*\sembed-certs-070240' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-070240/g' /etc/hosts;
else
echo '127.0.1.1 embed-certs-070240' | sudo tee -a /etc/hosts;
fi
fi
I0110 09:13:09.440244 235056 main.go:144] libmachine: SSH cmd err, output: <nil>:
I0110 09:13:09.440319 235056 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22427-2439/.minikube CaCertPath:/home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22427-2439/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22427-2439/.minikube}
I0110 09:13:09.440392 235056 ubuntu.go:190] setting up certificates
I0110 09:13:09.440423 235056 provision.go:84] configureAuth start
I0110 09:13:09.440513 235056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-070240
I0110 09:13:09.461280 235056 provision.go:143] copyHostCerts
I0110 09:13:09.461356 235056 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-2439/.minikube/cert.pem, removing ...
I0110 09:13:09.461370 235056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-2439/.minikube/cert.pem
I0110 09:13:09.461462 235056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22427-2439/.minikube/cert.pem (1123 bytes)
I0110 09:13:09.461561 235056 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-2439/.minikube/key.pem, removing ...
I0110 09:13:09.461570 235056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-2439/.minikube/key.pem
I0110 09:13:09.461597 235056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22427-2439/.minikube/key.pem (1675 bytes)
I0110 09:13:09.461679 235056 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-2439/.minikube/ca.pem, removing ...
I0110 09:13:09.461690 235056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-2439/.minikube/ca.pem
I0110 09:13:09.461715 235056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22427-2439/.minikube/ca.pem (1078 bytes)
I0110 09:13:09.461767 235056 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22427-2439/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca-key.pem org=jenkins.embed-certs-070240 san=[127.0.0.1 192.168.76.2 embed-certs-070240 localhost minikube]
I0110 09:13:09.509406 235056 provision.go:177] copyRemoteCerts
I0110 09:13:09.509494 235056 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0110 09:13:09.509543 235056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-070240
I0110 09:13:09.526162 235056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/embed-certs-070240/id_rsa Username:docker}
I0110 09:13:09.633243 235056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0110 09:13:09.652069 235056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
I0110 09:13:09.670949 235056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0110 09:13:09.692661 235056 provision.go:87] duration metric: took 252.199956ms to configureAuth
I0110 09:13:09.692687 235056 ubuntu.go:206] setting minikube options for container-runtime
I0110 09:13:09.692874 235056 config.go:182] Loaded profile config "embed-certs-070240": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I0110 09:13:09.692881 235056 machine.go:97] duration metric: took 3.798457289s to provisionDockerMachine
I0110 09:13:09.692888 235056 client.go:176] duration metric: took 9.186921227s to LocalClient.Create
I0110 09:13:09.692902 235056 start.go:167] duration metric: took 9.187022185s to libmachine.API.Create "embed-certs-070240"
I0110 09:13:09.692909 235056 start.go:293] postStartSetup for "embed-certs-070240" (driver="docker")
I0110 09:13:09.692918 235056 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0110 09:13:09.692969 235056 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0110 09:13:09.693024 235056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-070240
I0110 09:13:09.716823 235056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/embed-certs-070240/id_rsa Username:docker}
I0110 09:13:09.827652 235056 ssh_runner.go:195] Run: cat /etc/os-release
I0110 09:13:09.831222 235056 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0110 09:13:09.831247 235056 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I0110 09:13:09.831258 235056 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-2439/.minikube/addons for local assets ...
I0110 09:13:09.831313 235056 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-2439/.minikube/files for local assets ...
I0110 09:13:09.831410 235056 filesync.go:149] local asset: /home/jenkins/minikube-integration/22427-2439/.minikube/files/etc/ssl/certs/42572.pem -> 42572.pem in /etc/ssl/certs
I0110 09:13:09.831514 235056 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0110 09:13:09.839554 235056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/files/etc/ssl/certs/42572.pem --> /etc/ssl/certs/42572.pem (1708 bytes)
I0110 09:13:09.857921 235056 start.go:296] duration metric: took 164.997987ms for postStartSetup
I0110 09:13:09.858300 235056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-070240
I0110 09:13:09.875458 235056 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/config.json ...
I0110 09:13:09.875759 235056 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0110 09:13:09.875816 235056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-070240
I0110 09:13:09.893508 235056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/embed-certs-070240/id_rsa Username:docker}
I0110 09:13:09.996282 235056 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0110 09:13:10.000965 235056 start.go:128] duration metric: took 9.499201127s to createHost
I0110 09:13:10.001040 235056 start.go:83] releasing machines lock for "embed-certs-070240", held for 9.499392491s
I0110 09:13:10.001147 235056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-070240
I0110 09:13:10.018146 235056 ssh_runner.go:195] Run: cat /version.json
I0110 09:13:10.018208 235056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-070240
I0110 09:13:10.018508 235056 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0110 09:13:10.018572 235056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-070240
I0110 09:13:10.039826 235056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/embed-certs-070240/id_rsa Username:docker}
I0110 09:13:10.054582 235056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/embed-certs-070240/id_rsa Username:docker}
I0110 09:13:10.250898 235056 ssh_runner.go:195] Run: systemctl --version
I0110 09:13:10.257525 235056 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0110 09:13:10.262233 235056 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0110 09:13:10.262347 235056 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0110 09:13:10.289787 235056 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
I0110 09:13:10.289817 235056 start.go:496] detecting cgroup driver to use...
I0110 09:13:10.289874 235056 detect.go:175] detected "cgroupfs" cgroup driver on host os
I0110 09:13:10.289957 235056 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0110 09:13:10.305610 235056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0110 09:13:10.318921 235056 docker.go:218] disabling cri-docker service (if available) ...
I0110 09:13:10.319006 235056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0110 09:13:10.337214 235056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0110 09:13:10.356870 235056 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0110 09:13:10.489153 235056 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0110 09:13:10.618231 235056 docker.go:234] disabling docker service ...
I0110 09:13:10.618348 235056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0110 09:13:10.640954 235056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0110 09:13:10.654875 235056 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0110 09:13:10.781345 235056 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0110 09:13:10.907663 235056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0110 09:13:10.921239 235056 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0110 09:13:10.936750 235056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I0110 09:13:10.946292 235056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0110 09:13:10.955641 235056 containerd.go:147] configuring containerd to use "cgroupfs" as cgroup driver...
I0110 09:13:10.955732 235056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0110 09:13:10.965181 235056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0110 09:13:10.974777 235056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0110 09:13:10.984160 235056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0110 09:13:10.993414 235056 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0110 09:13:11.002277 235056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0110 09:13:11.011552 235056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0110 09:13:11.020731 235056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0110 09:13:11.032164 235056 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0110 09:13:11.041266 235056 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0110 09:13:11.049538 235056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0110 09:13:11.168198 235056 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0110 09:13:11.316082 235056 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
I0110 09:13:11.316198 235056 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0110 09:13:11.320219 235056 start.go:574] Will wait 60s for crictl version
I0110 09:13:11.320342 235056 ssh_runner.go:195] Run: which crictl
I0110 09:13:11.324162 235056 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I0110 09:13:11.348965 235056 start.go:590] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v2.2.1
RuntimeApiVersion: v1
I0110 09:13:11.349038 235056 ssh_runner.go:195] Run: containerd --version
I0110 09:13:11.370593 235056 ssh_runner.go:195] Run: containerd --version
I0110 09:13:11.397066 235056 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
I0110 09:13:11.399941 235056 cli_runner.go:164] Run: docker network inspect embed-certs-070240 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0110 09:13:11.417945 235056 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I0110 09:13:11.422087 235056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0110 09:13:11.432146 235056 kubeadm.go:884] updating cluster {Name:embed-certs-070240 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-070240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
I0110 09:13:11.432265 235056 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I0110 09:13:11.432334 235056 ssh_runner.go:195] Run: sudo crictl images --output json
I0110 09:13:11.457650 235056 containerd.go:635] all images are preloaded for containerd runtime.
I0110 09:13:11.457678 235056 containerd.go:542] Images already preloaded, skipping extraction
I0110 09:13:11.457739 235056 ssh_runner.go:195] Run: sudo crictl images --output json
I0110 09:13:11.482672 235056 containerd.go:635] all images are preloaded for containerd runtime.
I0110 09:13:11.482696 235056 cache_images.go:86] Images are preloaded, skipping loading
I0110 09:13:11.482705 235056 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 containerd true true} ...
I0110 09:13:11.482801 235056 kubeadm.go:947] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-070240 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
[Install]
config:
{KubernetesVersion:v1.35.0 ClusterName:embed-certs-070240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0110 09:13:11.482880 235056 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
I0110 09:13:11.509165 235056 cni.go:84] Creating CNI manager for ""
I0110 09:13:11.509191 235056 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0110 09:13:11.509209 235056 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I0110 09:13:11.509237 235056 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-070240 NodeName:embed-certs-070240 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0110 09:13:11.509365 235056 kubeadm.go:203] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.76.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
name: "embed-certs-070240"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.76.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.35.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
failCgroupV1: false
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0110 09:13:11.509436 235056 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
I0110 09:13:11.517572 235056 binaries.go:51] Found k8s binaries, skipping transfer
I0110 09:13:11.517660 235056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0110 09:13:11.525667 235056 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
I0110 09:13:11.539457 235056 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0110 09:13:11.553317 235056 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2251 bytes)
I0110 09:13:11.566470 235056 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I0110 09:13:11.570101 235056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0110 09:13:11.579890 235056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0110 09:13:11.696009 235056 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0110 09:13:11.712950 235056 certs.go:69] Setting up /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240 for IP: 192.168.76.2
I0110 09:13:11.712973 235056 certs.go:195] generating shared ca certs ...
I0110 09:13:11.712990 235056 certs.go:227] acquiring lock for ca certs: {Name:mk2efb7c26990a28337b434f05b8d75a57c7c690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0110 09:13:11.713133 235056 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22427-2439/.minikube/ca.key
I0110 09:13:11.713182 235056 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22427-2439/.minikube/proxy-client-ca.key
I0110 09:13:11.713196 235056 certs.go:257] generating profile certs ...
I0110 09:13:11.713250 235056 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/client.key
I0110 09:13:11.713266 235056 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/client.crt with IP's: []
I0110 09:13:12.910441 235056 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/client.crt ...
I0110 09:13:12.910475 235056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/client.crt: {Name:mk2d9389004d811bee0bcc877ceae3ae60d37010 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0110 09:13:12.910677 235056 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/client.key ...
I0110 09:13:12.910690 235056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/client.key: {Name:mk2b1f219c971b2eb1bc9dceb288ef6f57e6e435 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0110 09:13:12.910788 235056 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/apiserver.key.91a638bd
I0110 09:13:12.910803 235056 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/apiserver.crt.91a638bd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
I0110 09:13:13.109971 235056 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/apiserver.crt.91a638bd ...
I0110 09:13:13.110001 235056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/apiserver.crt.91a638bd: {Name:mk2a7b42284bd924503f7c2e46ff2701108bcfb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0110 09:13:13.110184 235056 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/apiserver.key.91a638bd ...
I0110 09:13:13.110198 235056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/apiserver.key.91a638bd: {Name:mk8800551c00db7229ba8254880432fdd5f179c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0110 09:13:13.110281 235056 certs.go:382] copying /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/apiserver.crt.91a638bd -> /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/apiserver.crt
I0110 09:13:13.110363 235056 certs.go:386] copying /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/apiserver.key.91a638bd -> /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/apiserver.key
I0110 09:13:13.110424 235056 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/proxy-client.key
I0110 09:13:13.110442 235056 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/proxy-client.crt with IP's: []
I0110 09:13:13.540851 235056 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/proxy-client.crt ...
I0110 09:13:13.540891 235056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/proxy-client.crt: {Name:mk43c326df22054de9ff9244dc3d225172273ca6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0110 09:13:13.541090 235056 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/proxy-client.key ...
I0110 09:13:13.541106 235056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/proxy-client.key: {Name:mk870d0c307b657814b43ca961ebe34168a48094 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0110 09:13:13.541293 235056 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/4257.pem (1338 bytes)
W0110 09:13:13.541340 235056 certs.go:480] ignoring /home/jenkins/minikube-integration/22427-2439/.minikube/certs/4257_empty.pem, impossibly tiny 0 bytes
I0110 09:13:13.541356 235056 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca-key.pem (1679 bytes)
I0110 09:13:13.541385 235056 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca.pem (1078 bytes)
I0110 09:13:13.541416 235056 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/cert.pem (1123 bytes)
I0110 09:13:13.541445 235056 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/key.pem (1675 bytes)
I0110 09:13:13.541493 235056 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2439/.minikube/files/etc/ssl/certs/42572.pem (1708 bytes)
I0110 09:13:13.542101 235056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0110 09:13:13.561952 235056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0110 09:13:13.580666 235056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0110 09:13:13.598633 235056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0110 09:13:13.616533 235056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
I0110 09:13:13.634498 235056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0110 09:13:13.652645 235056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0110 09:13:13.670048 235056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0110 09:13:13.687975 235056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/certs/4257.pem --> /usr/share/ca-certificates/4257.pem (1338 bytes)
I0110 09:13:13.705676 235056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/files/etc/ssl/certs/42572.pem --> /usr/share/ca-certificates/42572.pem (1708 bytes)
I0110 09:13:13.723435 235056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0110 09:13:13.740364 235056 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
I0110 09:13:13.753153 235056 ssh_runner.go:195] Run: openssl version
I0110 09:13:13.759208 235056 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4257.pem
I0110 09:13:13.766493 235056 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4257.pem /etc/ssl/certs/4257.pem
I0110 09:13:13.774135 235056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4257.pem
I0110 09:13:13.777904 235056 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 08:27 /usr/share/ca-certificates/4257.pem
I0110 09:13:13.777969 235056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4257.pem
I0110 09:13:13.819373 235056 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I0110 09:13:13.827076 235056 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4257.pem /etc/ssl/certs/51391683.0
I0110 09:13:13.834306 235056 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42572.pem
I0110 09:13:13.841704 235056 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42572.pem /etc/ssl/certs/42572.pem
I0110 09:13:13.849677 235056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42572.pem
I0110 09:13:13.853430 235056 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 08:27 /usr/share/ca-certificates/42572.pem
I0110 09:13:13.853495 235056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42572.pem
I0110 09:13:13.893893 235056 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I0110 09:13:13.901563 235056 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/42572.pem /etc/ssl/certs/3ec20f2e.0
I0110 09:13:13.908947 235056 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I0110 09:13:13.916153 235056 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I0110 09:13:13.923284 235056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0110 09:13:13.927113 235056 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 08:21 /usr/share/ca-certificates/minikubeCA.pem
I0110 09:13:13.927209 235056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0110 09:13:13.970696 235056 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I0110 09:13:13.978345 235056 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
I0110 09:13:13.986564 235056 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0110 09:13:13.990904 235056 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0110 09:13:13.990954 235056 kubeadm.go:401] StartCluster: {Name:embed-certs-070240 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-070240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I0110 09:13:13.991044 235056 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0110 09:13:13.991109 235056 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0110 09:13:14.020938 235056 cri.go:96] found id: ""
I0110 09:13:14.021077 235056 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0110 09:13:14.030776 235056 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0110 09:13:14.039661 235056 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I0110 09:13:14.039750 235056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0110 09:13:14.048317 235056 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0110 09:13:14.048382 235056 kubeadm.go:158] found existing configuration files:
I0110 09:13:14.048485 235056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0110 09:13:14.056355 235056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0110 09:13:14.056422 235056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0110 09:13:14.063756 235056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0110 09:13:14.071469 235056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0110 09:13:14.071569 235056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0110 09:13:14.079069 235056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0110 09:13:14.086589 235056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0110 09:13:14.086684 235056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0110 09:13:14.094127 235056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0110 09:13:14.101697 235056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0110 09:13:14.101759 235056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0110 09:13:14.109134 235056 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0110 09:13:14.144534 235056 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
I0110 09:13:14.144658 235056 kubeadm.go:319] [preflight] Running pre-flight checks
I0110 09:13:14.229150 235056 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I0110 09:13:14.229293 235056 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I0110 09:13:14.229365 235056 kubeadm.go:319] OS: Linux
I0110 09:13:14.229454 235056 kubeadm.go:319] CGROUPS_CPU: enabled
I0110 09:13:14.229545 235056 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I0110 09:13:14.229629 235056 kubeadm.go:319] CGROUPS_CPUSET: enabled
I0110 09:13:14.229712 235056 kubeadm.go:319] CGROUPS_DEVICES: enabled
I0110 09:13:14.229795 235056 kubeadm.go:319] CGROUPS_FREEZER: enabled
I0110 09:13:14.229883 235056 kubeadm.go:319] CGROUPS_MEMORY: enabled
I0110 09:13:14.229974 235056 kubeadm.go:319] CGROUPS_PIDS: enabled
I0110 09:13:14.230055 235056 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I0110 09:13:14.230135 235056 kubeadm.go:319] CGROUPS_BLKIO: enabled
I0110 09:13:14.294765 235056 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I0110 09:13:14.294911 235056 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0110 09:13:14.295032 235056 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0110 09:13:14.300901 235056 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0110 09:13:14.307584 235056 out.go:252] - Generating certificates and keys ...
I0110 09:13:14.307738 235056 kubeadm.go:319] [certs] Using existing ca certificate authority
I0110 09:13:14.307835 235056 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I0110 09:13:14.454524 235056 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I0110 09:13:14.725137 235056 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I0110 09:13:15.139043 235056 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I0110 09:13:15.202772 235056 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I0110 09:13:15.335767 235056 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I0110 09:13:15.335971 235056 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-070240 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
I0110 09:13:15.700029 235056 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I0110 09:13:15.700662 235056 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-070240 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
I0110 09:13:15.972725 235056 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I0110 09:13:16.258452 235056 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I0110 09:13:16.307955 235056 kubeadm.go:319] [certs] Generating "sa" key and public key
I0110 09:13:16.308275 235056 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0110 09:13:16.728858 235056 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I0110 09:13:16.889414 235056 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0110 09:13:16.982623 235056 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0110 09:13:17.129058 235056 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0110 09:13:17.372006 235056 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0110 09:13:17.372647 235056 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0110 09:13:17.375259 235056 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0110 09:13:17.378999 235056 out.go:252] - Booting up control plane ...
I0110 09:13:17.379102 235056 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0110 09:13:17.379181 235056 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0110 09:13:17.380148 235056 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0110 09:13:17.395884 235056 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0110 09:13:17.396329 235056 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I0110 09:13:17.404117 235056 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I0110 09:13:17.404559 235056 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0110 09:13:17.404805 235056 kubeadm.go:319] [kubelet-start] Starting the kubelet
I0110 09:13:17.533267 235056 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0110 09:13:17.533418 235056 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0110 09:13:18.534205 235056 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000968186s
I0110 09:13:18.537863 235056 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I0110 09:13:18.537965 235056 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
I0110 09:13:18.538060 235056 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I0110 09:13:18.538165 235056 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I0110 09:13:20.551006 235056 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.01238558s
I0110 09:13:22.110493 235056 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.57265444s
I0110 09:13:24.040141 235056 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.502035862s
I0110 09:13:24.076210 235056 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0110 09:13:24.095024 235056 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0110 09:13:24.113736 235056 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
I0110 09:13:24.114216 235056 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-070240 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0110 09:13:24.131459 235056 kubeadm.go:319] [bootstrap-token] Using token: 4u9dpa.2ia3kp786y1ddq78
I0110 09:13:24.134432 235056 out.go:252] - Configuring RBAC rules ...
I0110 09:13:24.134561 235056 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0110 09:13:24.142619 235056 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0110 09:13:24.156142 235056 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0110 09:13:24.162245 235056 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0110 09:13:24.170776 235056 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0110 09:13:24.176680 235056 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0110 09:13:24.447601 235056 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0110 09:13:24.889103 235056 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
I0110 09:13:25.451545 235056 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
I0110 09:13:25.453376 235056 kubeadm.go:319]
I0110 09:13:25.453449 235056 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
I0110 09:13:25.453455 235056 kubeadm.go:319]
I0110 09:13:25.453555 235056 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
I0110 09:13:25.453560 235056 kubeadm.go:319]
I0110 09:13:25.453586 235056 kubeadm.go:319] mkdir -p $HOME/.kube
I0110 09:13:25.453645 235056 kubeadm.go:319] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0110 09:13:25.453696 235056 kubeadm.go:319] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0110 09:13:25.453700 235056 kubeadm.go:319]
I0110 09:13:25.453756 235056 kubeadm.go:319] Alternatively, if you are the root user, you can run:
I0110 09:13:25.453760 235056 kubeadm.go:319]
I0110 09:13:25.453808 235056 kubeadm.go:319] export KUBECONFIG=/etc/kubernetes/admin.conf
I0110 09:13:25.453811 235056 kubeadm.go:319]
I0110 09:13:25.453864 235056 kubeadm.go:319] You should now deploy a pod network to the cluster.
I0110 09:13:25.453939 235056 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0110 09:13:25.454009 235056 kubeadm.go:319] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0110 09:13:25.454012 235056 kubeadm.go:319]
I0110 09:13:25.454097 235056 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
I0110 09:13:25.454173 235056 kubeadm.go:319] and service account keys on each node and then running the following as root:
I0110 09:13:25.454177 235056 kubeadm.go:319]
I0110 09:13:25.454260 235056 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 4u9dpa.2ia3kp786y1ddq78 \
I0110 09:13:25.454363 235056 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:d698b7a2ca74d25eb75ad84fc365dd179d4946e37bd477a6d05d4b1a2fdc5a3c \
I0110 09:13:25.454383 235056 kubeadm.go:319] --control-plane
I0110 09:13:25.454386 235056 kubeadm.go:319]
I0110 09:13:25.454471 235056 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
I0110 09:13:25.454475 235056 kubeadm.go:319]
I0110 09:13:25.454557 235056 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 4u9dpa.2ia3kp786y1ddq78 \
I0110 09:13:25.454658 235056 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:d698b7a2ca74d25eb75ad84fc365dd179d4946e37bd477a6d05d4b1a2fdc5a3c
I0110 09:13:25.457212 235056 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I0110 09:13:25.457640 235056 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I0110 09:13:25.457747 235056 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
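[editor note] The --discovery-token-ca-cert-hash value in the join commands above can be recomputed from the cluster CA using the standard kubeadm procedure; a minimal sketch, assuming the certificateDir /var/lib/minikube/certs shown earlier in this run and executed inside the node:

    # Recompute the SHA-256 hash of the CA public key (kubeadm's discovery-hash format).
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex \
      | sed 's/^.* //'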
I0110 09:13:25.457761 235056 cni.go:84] Creating CNI manager for ""
I0110 09:13:25.457769 235056 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0110 09:13:25.460734 235056 out.go:179] * Configuring CNI (Container Networking Interface) ...
I0110 09:13:25.463735 235056 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0110 09:13:25.467792 235056 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
I0110 09:13:25.467809 235056 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
I0110 09:13:25.488456 235056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
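[editor note] After the apply above, the recommended kindnet CNI should show up as a workload in kube-system; a quick check, reusing the same kubectl binary and kubeconfig the runner invokes (a sketch, not part of the original run):

    # List kube-system DaemonSets; the CNI manifest applied above should appear here.
    sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get daemonsets -n kube-system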
I0110 09:13:25.816798 235056 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0110 09:13:25.816933 235056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0110 09:13:25.817034 235056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-070240 minikube.k8s.io/updated_at=2026_01_10T09_13_25_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=70a45a31ac61db388909158c013d81580d01a5ee minikube.k8s.io/name=embed-certs-070240 minikube.k8s.io/primary=true
I0110 09:13:25.980218 235056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0110 09:13:25.980284 235056 ops.go:34] apiserver oom_adj: -16
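[editor note] The -16 logged here is read back from the oom_adj probe a few lines up; a negative value tells the kernel OOM killer to prefer other processes over the apiserver. The same check can be reproduced by hand (a sketch, assuming the embed-certs-070240 profile is still running):

    minikube ssh -p embed-certs-070240 -- 'sudo cat /proc/$(pgrep kube-apiserver)/oom_adj'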
I0110 09:13:26.480394 235056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0110 09:13:26.980571 235056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0110 09:13:27.481130 235056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0110 09:13:27.980353 235056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0110 09:13:28.480468 235056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0110 09:13:28.980832 235056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0110 09:13:29.480360 235056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0110 09:13:29.980322 235056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0110 09:13:30.143180 235056 kubeadm.go:1114] duration metric: took 4.326293232s to wait for elevateKubeSystemPrivileges
I0110 09:13:30.143221 235056 kubeadm.go:403] duration metric: took 16.152270365s to StartCluster
I0110 09:13:30.143239 235056 settings.go:142] acquiring lock: {Name:mkb2ebd5d087e1c54fbd873c70e4f039c6456e0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0110 09:13:30.143310 235056 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/22427-2439/kubeconfig
I0110 09:13:30.144396 235056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2439/kubeconfig: {Name:mk140954996243c884fdf4f6dda6bc952a39b87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0110 09:13:30.144672 235056 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0110 09:13:30.144766 235056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0110 09:13:30.145045 235056 config.go:182] Loaded profile config "embed-certs-070240": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I0110 09:13:30.145088 235056 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0110 09:13:30.145166 235056 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-070240"
I0110 09:13:30.145189 235056 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-070240"
I0110 09:13:30.145215 235056 host.go:66] Checking if "embed-certs-070240" exists ...
I0110 09:13:30.145754 235056 cli_runner.go:164] Run: docker container inspect embed-certs-070240 --format={{.State.Status}}
I0110 09:13:30.146329 235056 addons.go:70] Setting default-storageclass=true in profile "embed-certs-070240"
I0110 09:13:30.146355 235056 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-070240"
I0110 09:13:30.146723 235056 cli_runner.go:164] Run: docker container inspect embed-certs-070240 --format={{.State.Status}}
I0110 09:13:30.149274 235056 out.go:179] * Verifying Kubernetes components...
I0110 09:13:30.160402 235056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0110 09:13:30.192062 235056 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0110 09:13:30.197921 235056 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0110 09:13:30.197954 235056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0110 09:13:30.198019 235056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-070240
I0110 09:13:30.200136 235056 addons.go:239] Setting addon default-storageclass=true in "embed-certs-070240"
I0110 09:13:30.200191 235056 host.go:66] Checking if "embed-certs-070240" exists ...
I0110 09:13:30.200635 235056 cli_runner.go:164] Run: docker container inspect embed-certs-070240 --format={{.State.Status}}
I0110 09:13:30.234511 235056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/embed-certs-070240/id_rsa Username:docker}
I0110 09:13:30.246509 235056 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
I0110 09:13:30.246536 235056 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0110 09:13:30.246600 235056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-070240
I0110 09:13:30.281038 235056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/embed-certs-070240/id_rsa Username:docker}
I0110 09:13:30.468391 235056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.76.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0110 09:13:30.468496 235056 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0110 09:13:30.500204 235056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0110 09:13:30.562336 235056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0110 09:13:30.901888 235056 node_ready.go:35] waiting up to 6m0s for node "embed-certs-070240" to be "Ready" ...
I0110 09:13:30.902222 235056 start.go:987] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
I0110 09:13:31.392928 235056 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
I0110 09:13:31.395709 235056 addons.go:530] duration metric: took 1.250611329s for enable addons: enabled=[default-storageclass storage-provisioner]
I0110 09:13:31.405870 235056 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-070240" context rescaled to 1 replicas
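[editor note] The rescale logged here is minikube trimming CoreDNS from its default two replicas down to one for a single-node cluster; the equivalent manual step, with a kubeconfig pointing at this cluster, would be roughly:

    kubectl -n kube-system scale deployment coredns --replicas=1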
W0110 09:13:32.905078 235056 node_ready.go:57] node "embed-certs-070240" has "Ready":"False" status (will retry)
W0110 09:13:35.404431 235056 node_ready.go:57] node "embed-certs-070240" has "Ready":"False" status (will retry)
W0110 09:13:37.406693 235056 node_ready.go:57] node "embed-certs-070240" has "Ready":"False" status (will retry)
W0110 09:13:39.904753 235056 node_ready.go:57] node "embed-certs-070240" has "Ready":"False" status (will retry)
W0110 09:13:41.904979 235056 node_ready.go:57] node "embed-certs-070240" has "Ready":"False" status (will retry)
I0110 09:13:43.404607 235056 node_ready.go:49] node "embed-certs-070240" is "Ready"
I0110 09:13:43.404639 235056 node_ready.go:38] duration metric: took 12.502715303s for node "embed-certs-070240" to be "Ready" ...
I0110 09:13:43.404653 235056 api_server.go:52] waiting for apiserver process to appear ...
I0110 09:13:43.404724 235056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0110 09:13:43.417269 235056 api_server.go:72] duration metric: took 13.272558352s to wait for apiserver process to appear ...
I0110 09:13:43.417294 235056 api_server.go:88] waiting for apiserver healthz status ...
I0110 09:13:43.417312 235056 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0110 09:13:43.425684 235056 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
ok
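[editor note] The healthz probe above can be reproduced from the host; on a default RBAC setup /healthz is readable without client credentials, so a plain curl suffices (a sketch using the apiserver address from this run):

    # -k skips verification of the cluster's self-signed serving certificate.
    curl -k https://192.168.76.2:8443/healthz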
I0110 09:13:43.426802 235056 api_server.go:141] control plane version: v1.35.0
I0110 09:13:43.426827 235056 api_server.go:131] duration metric: took 9.525816ms to wait for apiserver health ...
I0110 09:13:43.426836 235056 system_pods.go:43] waiting for kube-system pods to appear ...
I0110 09:13:43.435391 235056 system_pods.go:59] 8 kube-system pods found
I0110 09:13:43.435424 235056 system_pods.go:61] "coredns-7d764666f9-6tr7h" [a28851eb-79e5-49a3-a177-bccbb53c272e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0110 09:13:43.435432 235056 system_pods.go:61] "etcd-embed-certs-070240" [18d8b4d6-5395-4dea-8f5d-08ffa960a7ab] Running
I0110 09:13:43.435448 235056 system_pods.go:61] "kindnet-ns57l" [2f3cdacc-f2bb-49d1-9424-5fd62081ecaf] Running
I0110 09:13:43.435453 235056 system_pods.go:61] "kube-apiserver-embed-certs-070240" [e0a3faa8-1472-4f68-a1fd-84f22a19c4d8] Running
I0110 09:13:43.435462 235056 system_pods.go:61] "kube-controller-manager-embed-certs-070240" [1c9c96b6-d52e-48de-ba80-e464a7153b22] Running
I0110 09:13:43.435467 235056 system_pods.go:61] "kube-proxy-txqld" [44053c1b-39fd-47b0-b88d-b01dc9ec9935] Running
I0110 09:13:43.435473 235056 system_pods.go:61] "kube-scheduler-embed-certs-070240" [f7cafcd1-5017-46d4-803b-2ab15558f532] Running
I0110 09:13:43.435479 235056 system_pods.go:61] "storage-provisioner" [c7153e29-69ef-4001-9a0a-c6b18ed7c134] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0110 09:13:43.435491 235056 system_pods.go:74] duration metric: took 8.648864ms to wait for pod list to return data ...
I0110 09:13:43.435500 235056 default_sa.go:34] waiting for default service account to be created ...
I0110 09:13:43.439320 235056 default_sa.go:45] found service account: "default"
I0110 09:13:43.439343 235056 default_sa.go:55] duration metric: took 3.832195ms for default service account to be created ...
I0110 09:13:43.439393 235056 system_pods.go:116] waiting for k8s-apps to be running ...
I0110 09:13:43.443923 235056 system_pods.go:86] 8 kube-system pods found
I0110 09:13:43.443997 235056 system_pods.go:89] "coredns-7d764666f9-6tr7h" [a28851eb-79e5-49a3-a177-bccbb53c272e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0110 09:13:43.444019 235056 system_pods.go:89] "etcd-embed-certs-070240" [18d8b4d6-5395-4dea-8f5d-08ffa960a7ab] Running
I0110 09:13:43.444042 235056 system_pods.go:89] "kindnet-ns57l" [2f3cdacc-f2bb-49d1-9424-5fd62081ecaf] Running
I0110 09:13:43.444076 235056 system_pods.go:89] "kube-apiserver-embed-certs-070240" [e0a3faa8-1472-4f68-a1fd-84f22a19c4d8] Running
I0110 09:13:43.444103 235056 system_pods.go:89] "kube-controller-manager-embed-certs-070240" [1c9c96b6-d52e-48de-ba80-e464a7153b22] Running
I0110 09:13:43.444126 235056 system_pods.go:89] "kube-proxy-txqld" [44053c1b-39fd-47b0-b88d-b01dc9ec9935] Running
I0110 09:13:43.444147 235056 system_pods.go:89] "kube-scheduler-embed-certs-070240" [f7cafcd1-5017-46d4-803b-2ab15558f532] Running
I0110 09:13:43.444180 235056 system_pods.go:89] "storage-provisioner" [c7153e29-69ef-4001-9a0a-c6b18ed7c134] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0110 09:13:43.444226 235056 retry.go:84] will retry after 300ms: missing components: kube-dns
I0110 09:13:43.729092 235056 system_pods.go:86] 8 kube-system pods found
I0110 09:13:43.729180 235056 system_pods.go:89] "coredns-7d764666f9-6tr7h" [a28851eb-79e5-49a3-a177-bccbb53c272e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0110 09:13:43.729201 235056 system_pods.go:89] "etcd-embed-certs-070240" [18d8b4d6-5395-4dea-8f5d-08ffa960a7ab] Running
I0110 09:13:43.729241 235056 system_pods.go:89] "kindnet-ns57l" [2f3cdacc-f2bb-49d1-9424-5fd62081ecaf] Running
I0110 09:13:43.729265 235056 system_pods.go:89] "kube-apiserver-embed-certs-070240" [e0a3faa8-1472-4f68-a1fd-84f22a19c4d8] Running
I0110 09:13:43.729292 235056 system_pods.go:89] "kube-controller-manager-embed-certs-070240" [1c9c96b6-d52e-48de-ba80-e464a7153b22] Running
I0110 09:13:43.729311 235056 system_pods.go:89] "kube-proxy-txqld" [44053c1b-39fd-47b0-b88d-b01dc9ec9935] Running
I0110 09:13:43.729342 235056 system_pods.go:89] "kube-scheduler-embed-certs-070240" [f7cafcd1-5017-46d4-803b-2ab15558f532] Running
I0110 09:13:43.729368 235056 system_pods.go:89] "storage-provisioner" [c7153e29-69ef-4001-9a0a-c6b18ed7c134] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0110 09:13:44.012729 235056 system_pods.go:86] 8 kube-system pods found
I0110 09:13:44.012809 235056 system_pods.go:89] "coredns-7d764666f9-6tr7h" [a28851eb-79e5-49a3-a177-bccbb53c272e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0110 09:13:44.012836 235056 system_pods.go:89] "etcd-embed-certs-070240" [18d8b4d6-5395-4dea-8f5d-08ffa960a7ab] Running
I0110 09:13:44.012876 235056 system_pods.go:89] "kindnet-ns57l" [2f3cdacc-f2bb-49d1-9424-5fd62081ecaf] Running
I0110 09:13:44.012898 235056 system_pods.go:89] "kube-apiserver-embed-certs-070240" [e0a3faa8-1472-4f68-a1fd-84f22a19c4d8] Running
I0110 09:13:44.012917 235056 system_pods.go:89] "kube-controller-manager-embed-certs-070240" [1c9c96b6-d52e-48de-ba80-e464a7153b22] Running
I0110 09:13:44.012936 235056 system_pods.go:89] "kube-proxy-txqld" [44053c1b-39fd-47b0-b88d-b01dc9ec9935] Running
I0110 09:13:44.012956 235056 system_pods.go:89] "kube-scheduler-embed-certs-070240" [f7cafcd1-5017-46d4-803b-2ab15558f532] Running
I0110 09:13:44.012988 235056 system_pods.go:89] "storage-provisioner" [c7153e29-69ef-4001-9a0a-c6b18ed7c134] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0110 09:13:44.013017 235056 system_pods.go:126] duration metric: took 573.615542ms to wait for k8s-apps to be running ...
I0110 09:13:44.013042 235056 system_svc.go:44] waiting for kubelet service to be running ....
I0110 09:13:44.013128 235056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0110 09:13:44.050285 235056 system_svc.go:56] duration metric: took 37.233788ms WaitForService to wait for kubelet
I0110 09:13:44.050407 235056 kubeadm.go:587] duration metric: took 13.90568046s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0110 09:13:44.050456 235056 node_conditions.go:102] verifying NodePressure condition ...
I0110 09:13:44.069413 235056 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
I0110 09:13:44.069485 235056 node_conditions.go:123] node cpu capacity is 2
I0110 09:13:44.069514 235056 node_conditions.go:105] duration metric: took 19.032268ms to run NodePressure ...
I0110 09:13:44.069543 235056 start.go:242] waiting for startup goroutines ...
I0110 09:13:44.069575 235056 start.go:247] waiting for cluster config update ...
I0110 09:13:44.069603 235056 start.go:256] writing updated cluster config ...
I0110 09:13:44.069900 235056 ssh_runner.go:195] Run: rm -f paused
I0110 09:13:44.073982 235056 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I0110 09:13:44.096622 235056 pod_ready.go:83] waiting for pod "coredns-7d764666f9-6tr7h" in "kube-system" namespace to be "Ready" or be gone ...
I0110 09:13:44.103780 235056 pod_ready.go:94] pod "coredns-7d764666f9-6tr7h" is "Ready"
I0110 09:13:44.103856 235056 pod_ready.go:86] duration metric: took 7.164071ms for pod "coredns-7d764666f9-6tr7h" in "kube-system" namespace to be "Ready" or be gone ...
I0110 09:13:44.106863 235056 pod_ready.go:83] waiting for pod "etcd-embed-certs-070240" in "kube-system" namespace to be "Ready" or be gone ...
I0110 09:13:44.112867 235056 pod_ready.go:94] pod "etcd-embed-certs-070240" is "Ready"
I0110 09:13:44.112940 235056 pod_ready.go:86] duration metric: took 6.004422ms for pod "etcd-embed-certs-070240" in "kube-system" namespace to be "Ready" or be gone ...
I0110 09:13:44.115833 235056 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-070240" in "kube-system" namespace to be "Ready" or be gone ...
I0110 09:13:44.121113 235056 pod_ready.go:94] pod "kube-apiserver-embed-certs-070240" is "Ready"
I0110 09:13:44.121182 235056 pod_ready.go:86] duration metric: took 5.28827ms for pod "kube-apiserver-embed-certs-070240" in "kube-system" namespace to be "Ready" or be gone ...
I0110 09:13:44.124152 235056 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-070240" in "kube-system" namespace to be "Ready" or be gone ...
I0110 09:13:44.478177 235056 pod_ready.go:94] pod "kube-controller-manager-embed-certs-070240" is "Ready"
I0110 09:13:44.478204 235056 pod_ready.go:86] duration metric: took 353.989503ms for pod "kube-controller-manager-embed-certs-070240" in "kube-system" namespace to be "Ready" or be gone ...
I0110 09:13:44.678525 235056 pod_ready.go:83] waiting for pod "kube-proxy-txqld" in "kube-system" namespace to be "Ready" or be gone ...
I0110 09:13:45.092574 235056 pod_ready.go:94] pod "kube-proxy-txqld" is "Ready"
I0110 09:13:45.092609 235056 pod_ready.go:86] duration metric: took 414.058387ms for pod "kube-proxy-txqld" in "kube-system" namespace to be "Ready" or be gone ...
I0110 09:13:45.280131 235056 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-070240" in "kube-system" namespace to be "Ready" or be gone ...
I0110 09:13:45.679759 235056 pod_ready.go:94] pod "kube-scheduler-embed-certs-070240" is "Ready"
I0110 09:13:45.679840 235056 pod_ready.go:86] duration metric: took 399.603154ms for pod "kube-scheduler-embed-certs-070240" in "kube-system" namespace to be "Ready" or be gone ...
I0110 09:13:45.679870 235056 pod_ready.go:40] duration metric: took 1.605787632s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I0110 09:13:45.757621 235056 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
I0110 09:13:45.760895 235056 out.go:203]
W0110 09:13:45.763879 235056 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
I0110 09:13:45.766911 235056 out.go:179] - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
I0110 09:13:45.770720 235056 out.go:179] * Done! kubectl is now configured to use "embed-certs-070240" cluster and "default" namespace by default
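[editor note] The skew warning above is only a client/server minor-version gap; the quickest confirmation, using the command the log itself suggests:

    kubectl version                  # shows client 1.33.2 against server 1.35.0
    minikube kubectl -- get pods -A  # uses a matching v1.35.0 kubectl instead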
I0110 09:13:52.839263 209870 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001081313s
I0110 09:13:52.839294 209870 kubeadm.go:319]
I0110 09:13:52.839378 209870 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I0110 09:13:52.839416 209870 kubeadm.go:319] - The kubelet is not running
I0110 09:13:52.839522 209870 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0110 09:13:52.839531 209870 kubeadm.go:319]
I0110 09:13:52.839635 209870 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0110 09:13:52.839666 209870 kubeadm.go:319] - 'systemctl status kubelet'
I0110 09:13:52.839697 209870 kubeadm.go:319] - 'journalctl -xeu kubelet'
I0110 09:13:52.839701 209870 kubeadm.go:319]
I0110 09:13:52.844761 209870 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I0110 09:13:52.845168 209870 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I0110 09:13:52.845278 209870 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0110 09:13:52.845530 209870 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0110 09:13:52.845541 209870 kubeadm.go:319]
I0110 09:13:52.845606 209870 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
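[editor note] From here the runner falls back to log collection. The two commands kubeadm recommends above must be run inside the node, not on the host; a sketch for this profile:

    minikube ssh -p force-systemd-flag-447307 -- 'sudo systemctl status kubelet'
    minikube ssh -p force-systemd-flag-447307 -- 'sudo journalctl -xeu kubelet | tail -n 100'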
I0110 09:13:52.845665 209870 kubeadm.go:403] duration metric: took 8m6.505307114s to StartCluster
I0110 09:13:52.845717 209870 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0110 09:13:52.845786 209870 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
I0110 09:13:52.876377 209870 cri.go:96] found id: ""
I0110 09:13:52.876416 209870 logs.go:282] 0 containers: []
W0110 09:13:52.876425 209870 logs.go:284] No container was found matching "kube-apiserver"
I0110 09:13:52.876432 209870 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0110 09:13:52.876504 209870 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
I0110 09:13:52.902015 209870 cri.go:96] found id: ""
I0110 09:13:52.902038 209870 logs.go:282] 0 containers: []
W0110 09:13:52.902047 209870 logs.go:284] No container was found matching "etcd"
I0110 09:13:52.902055 209870 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0110 09:13:52.902130 209870 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
I0110 09:13:52.930106 209870 cri.go:96] found id: ""
I0110 09:13:52.930126 209870 logs.go:282] 0 containers: []
W0110 09:13:52.930135 209870 logs.go:284] No container was found matching "coredns"
I0110 09:13:52.930141 209870 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0110 09:13:52.930200 209870 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
I0110 09:13:52.962753 209870 cri.go:96] found id: ""
I0110 09:13:52.962779 209870 logs.go:282] 0 containers: []
W0110 09:13:52.962788 209870 logs.go:284] No container was found matching "kube-scheduler"
I0110 09:13:52.962794 209870 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0110 09:13:52.962852 209870 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
I0110 09:13:52.988595 209870 cri.go:96] found id: ""
I0110 09:13:52.988621 209870 logs.go:282] 0 containers: []
W0110 09:13:52.988630 209870 logs.go:284] No container was found matching "kube-proxy"
I0110 09:13:52.988637 209870 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0110 09:13:52.988699 209870 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
I0110 09:13:53.013852 209870 cri.go:96] found id: ""
I0110 09:13:53.013877 209870 logs.go:282] 0 containers: []
W0110 09:13:53.013886 209870 logs.go:284] No container was found matching "kube-controller-manager"
I0110 09:13:53.013893 209870 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0110 09:13:53.013952 209870 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
I0110 09:13:53.043720 209870 cri.go:96] found id: ""
I0110 09:13:53.043743 209870 logs.go:282] 0 containers: []
W0110 09:13:53.043752 209870 logs.go:284] No container was found matching "kindnet"
I0110 09:13:53.043763 209870 logs.go:123] Gathering logs for kubelet ...
I0110 09:13:53.043775 209870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0110 09:13:53.104744 209870 logs.go:123] Gathering logs for dmesg ...
I0110 09:13:53.104780 209870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0110 09:13:53.118863 209870 logs.go:123] Gathering logs for describe nodes ...
I0110 09:13:53.118893 209870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0110 09:13:53.212815 209870 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E0110 09:13:53.185939 4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0110 09:13:53.187077 4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0110 09:13:53.203692 4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0110 09:13:53.206842 4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0110 09:13:53.207578 4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
E0110 09:13:53.185939 4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0110 09:13:53.187077 4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0110 09:13:53.203692 4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0110 09:13:53.206842 4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0110 09:13:53.207578 4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0110 09:13:53.212852 209870 logs.go:123] Gathering logs for containerd ...
I0110 09:13:53.212865 209870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0110 09:13:53.257937 209870 logs.go:123] Gathering logs for container status ...
I0110 09:13:53.257973 209870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W0110 09:13:53.288259 209870 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001081313s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
W0110 09:13:53.288311 209870 out.go:285] *
W0110 09:13:53.288366 209870 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001081313s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
W0110 09:13:53.288383 209870 out.go:285] *
W0110 09:13:53.288643 209870 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0110 09:13:53.293744 209870 out.go:203]
W0110 09:13:53.295826 209870 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001081313s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
W0110 09:13:53.295871 209870 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0110 09:13:53.295893 209870 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
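[editor note] A retry along the lines of the suggestion above would look roughly like this (a sketch; the profile is deleted first so the new kubelet flag takes effect on a fresh node):

    minikube delete -p force-systemd-flag-447307
    minikube start -p force-systemd-flag-447307 --driver=docker --container-runtime=containerd \
      --extra-config=kubelet.cgroup-driver=systemd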
I0110 09:13:53.298998 209870 out.go:203]
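[editor note] Separately, the repeated SystemVerification warning concerns cgroup v1 deprecation: on a kubelet v1.35 node that must stay on cgroup v1, the opt-in it describes is a KubeletConfiguration field. A minimal sketch of such a config fragment; the YAML field name failCgroupV1 and the output filename are assumptions to verify against the KEP linked in the warning:

    # Sketch: kubelet config fragment opting back into cgroup v1, per the warning above.
    cat <<'EOF' > kubelet-cgroupv1.yaml   # hypothetical filename
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false
    EOF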
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
==> containerd <==
Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.762674350Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.762686831Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.762726790Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.762745942Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.762759997Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.762771829Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.762782242Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.762793672Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.762806652Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.762837069Z" level=info msg="Connect containerd service"
Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.763117871Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.763699232Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.782146927Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.782208787Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.782239630Z" level=info msg="Start subscribing containerd event"
Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.782290716Z" level=info msg="Start recovering state"
Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.821359137Z" level=info msg="Start event monitor"
Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.821550846Z" level=info msg="Start cni network conf syncer for default"
Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.821617333Z" level=info msg="Start streaming server"
Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.821684674Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.821794764Z" level=info msg="runtime interface starting up..."
Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.821848829Z" level=info msg="starting plugins..."
Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.821911977Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jan 10 09:05:44 force-systemd-flag-447307 systemd[1]: Started containerd.service - containerd container runtime.
Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.824569251Z" level=info msg="containerd successfully booted in 0.083945s"
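The "failed to load cni during init" error in the containerd log above is usually transient during bring-up: the CNI config is installed after containerd starts, so an empty /etc/cni/net.d at this point is expected rather than the cause of this failure. A quick check, assuming (as elsewhere in this log) that the docker-driver guest container is named after the profile:

    docker exec force-systemd-flag-447307 ls -la /etc/cni/net.d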
==> describe nodes <==
command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E0110 09:13:54.680425 4914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0110 09:13:54.681236 4914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0110 09:13:54.682980 4914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0110 09:13:54.683635 4914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E0110 09:13:54.685255 4914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
==> dmesg <==
[Jan10 08:17] ACPI: SRAT not present
[ +0.000000] ACPI: SRAT not present
[ +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
[ +0.015531] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[ +0.518244] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
[ +0.036376] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
[ +0.856143] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
[ +6.640312] kauditd_printk_skb: 39 callbacks suppressed
[Jan10 08:20] hrtimer: interrupt took 13698190 ns
==> kernel <==
09:13:54 up 56 min, 0 user, load average: 2.08, 1.99, 1.91
Linux force-systemd-flag-447307 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
==> kubelet <==
Jan 10 09:13:51 force-systemd-flag-447307 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 10 09:13:52 force-systemd-flag-447307 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
Jan 10 09:13:52 force-systemd-flag-447307 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 10 09:13:52 force-systemd-flag-447307 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 10 09:13:52 force-systemd-flag-447307 kubelet[4713]: E0110 09:13:52.474338 4713 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Jan 10 09:13:52 force-systemd-flag-447307 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 10 09:13:52 force-systemd-flag-447307 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 10 09:13:53 force-systemd-flag-447307 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
Jan 10 09:13:53 force-systemd-flag-447307 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 10 09:13:53 force-systemd-flag-447307 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 10 09:13:53 force-systemd-flag-447307 kubelet[4789]: E0110 09:13:53.252372 4789 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Jan 10 09:13:53 force-systemd-flag-447307 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 10 09:13:53 force-systemd-flag-447307 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 10 09:13:53 force-systemd-flag-447307 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
Jan 10 09:13:53 force-systemd-flag-447307 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 10 09:13:53 force-systemd-flag-447307 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 10 09:13:54 force-systemd-flag-447307 kubelet[4823]: E0110 09:13:54.007587 4823 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Jan 10 09:13:54 force-systemd-flag-447307 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 10 09:13:54 force-systemd-flag-447307 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 10 09:13:54 force-systemd-flag-447307 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
Jan 10 09:13:54 force-systemd-flag-447307 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 10 09:13:54 force-systemd-flag-447307 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 10 09:13:54 force-systemd-flag-447307 kubelet[4919]: E0110 09:13:54.753275 4919 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Jan 10 09:13:54 force-systemd-flag-447307 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 10 09:13:54 force-systemd-flag-447307 systemd[1]: kubelet.service: Failed with result 'exit-code'.
-- /stdout --
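The repeating kubelet error in the captured log ("kubelet is configured to not run on a host using cgroup v1") is the root cause of the whole failure: kubelet v1.35 refuses to start on a cgroup v1 host unless FailCgroupV1 is relaxed, and the crash loop (restart counters 318 through 321) shows this host is still on cgroup v1, which is why the apiserver on localhost:8443 never came up. A one-line check of the host's cgroup version (cgroup2fs means v2; tmpfs means a v1/hybrid hierarchy):

    stat -fc %T /sys/fs/cgroup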
helpers_test.go:263: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-447307 -n force-systemd-flag-447307
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-447307 -n force-systemd-flag-447307: exit status 6 (376.559027ms)
-- stdout --
Stopped
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E0110 09:13:55.188880 238846 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-447307" does not appear in /home/jenkins/minikube-integration/22427-2439/kubeconfig
** /stderr **
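The "stale minikube-vm" WARNING in the status output above carries its own remedy; scoped to this profile via minikube's global -p/--profile flag, it would look like:

    out/minikube-linux-arm64 update-context -p force-systemd-flag-447307

Here it is moot: the stderr shows the profile never made it into the kubeconfig at all, and the profile is deleted during cleanup just below.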
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-flag-447307" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-flag-447307" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-arm64 delete -p force-systemd-flag-447307
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-447307: (2.132879741s)
--- FAIL: TestForceSystemdFlag (503.98s)