=== RUN TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT TestForceSystemdFlag
docker_test.go:91: (dbg) Run: out/minikube-linux-arm64 start -p force-systemd-flag-257442 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=containerd
docker_test.go:91: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-flag-257442 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=containerd: exit status 109 (8m16.82059088s)
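For context, the integration suite drives the binary under test as a subprocess (the "(dbg) Run:" lines above are its trace of that). A minimal sketch of the invocation pattern, using os/exec directly rather than the suite's own helper (whose exact signature is not shown in this log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the command recorded at docker_test.go:91 above.
	cmd := exec.Command("out/minikube-linux-arm64",
		"start", "-p", "force-systemd-flag-257442",
		"--memory=3072", "--force-systemd",
		"--alsologtostderr", "-v=5",
		"--driver=docker", "--container-runtime=containerd")
	out, err := cmd.CombinedOutput()
	// On failure, err is an *exec.ExitError carrying the exit status (109 here).
	fmt.Printf("err: %v\noutput: %s\n", err, out)
}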
-- stdout --
* [force-systemd-flag-257442] minikube v1.37.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=22352
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/22352-2380/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2380/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "force-systemd-flag-257442" primary control-plane node in "force-systemd-flag-257442" cluster
* Pulling base image v0.0.48-1766884053-22351 ...
* Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
-- /stdout --
** stderr **
I1228 07:11:42.715378 202182 out.go:360] Setting OutFile to fd 1 ...
I1228 07:11:42.715558 202182 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 07:11:42.715590 202182 out.go:374] Setting ErrFile to fd 2...
I1228 07:11:42.715612 202182 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 07:11:42.715999 202182 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2380/.minikube/bin
I1228 07:11:42.716697 202182 out.go:368] Setting JSON to false
I1228 07:11:42.718260 202182 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3253,"bootTime":1766902650,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
I1228 07:11:42.718337 202182 start.go:143] virtualization:
I1228 07:11:42.722422 202182 out.go:179] * [force-systemd-flag-257442] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I1228 07:11:42.725859 202182 notify.go:221] Checking for updates...
I1228 07:11:42.726417 202182 out.go:179] - MINIKUBE_LOCATION=22352
I1228 07:11:42.729863 202182 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1228 07:11:42.733034 202182 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22352-2380/kubeconfig
I1228 07:11:42.736198 202182 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2380/.minikube
I1228 07:11:42.739620 202182 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I1228 07:11:42.742650 202182 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1228 07:11:42.746164 202182 config.go:182] Loaded profile config "force-systemd-env-782848": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1228 07:11:42.746308 202182 driver.go:422] Setting default libvirt URI to qemu:///system
I1228 07:11:42.770870 202182 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
I1228 07:11:42.770972 202182 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1228 07:11:42.844310 202182 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-28 07:11:42.83443823 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
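The docker system info probe above is how minikube inspects the host daemon; note the host reports CgroupDriver:cgroupfs, which is why --force-systemd must later reconfigure the guest's runtime explicitly. A minimal sketch of decoding just the fields this log consults (struct fields assumed from the dump above, not the full Docker API type):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Subset of fields visible in the log's docker info dump.
type dockerInfo struct {
	NCPU         int    `json:"NCPU"`
	MemTotal     int64  `json:"MemTotal"`
	CgroupDriver string `json:"CgroupDriver"`
	Architecture string `json:"Architecture"`
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	// e.g. {NCPU:2 MemTotal:8214831104 CgroupDriver:cgroupfs Architecture:aarch64}
	fmt.Printf("%+v\n", info)
}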
I1228 07:11:42.844418 202182 docker.go:319] overlay module found
I1228 07:11:42.849492 202182 out.go:179] * Using the docker driver based on user configuration
I1228 07:11:42.852348 202182 start.go:309] selected driver: docker
I1228 07:11:42.852368 202182 start.go:928] validating driver "docker" against <nil>
I1228 07:11:42.852382 202182 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1228 07:11:42.853288 202182 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1228 07:11:42.918090 202182 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-28 07:11:42.898066629 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1228 07:11:42.918240 202182 start_flags.go:333] no existing cluster config was found, will generate one from the flags
I1228 07:11:42.918462 202182 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
I1228 07:11:42.921452 202182 out.go:179] * Using Docker driver with root privileges
I1228 07:11:42.924398 202182 cni.go:84] Creating CNI manager for ""
I1228 07:11:42.924520 202182 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1228 07:11:42.924534 202182 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
I1228 07:11:42.924614 202182 start.go:353] cluster config:
{Name:force-systemd-flag-257442 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-257442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1228 07:11:42.927742 202182 out.go:179] * Starting "force-systemd-flag-257442" primary control-plane node in "force-systemd-flag-257442" cluster
I1228 07:11:42.930570 202182 cache.go:134] Beginning downloading kic base image for docker with containerd
I1228 07:11:42.933508 202182 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
I1228 07:11:42.936360 202182 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I1228 07:11:42.936405 202182 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22352-2380/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
I1228 07:11:42.936416 202182 cache.go:65] Caching tarball of preloaded images
I1228 07:11:42.936441 202182 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
I1228 07:11:42.936533 202182 preload.go:251] Found /home/jenkins/minikube-integration/22352-2380/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I1228 07:11:42.936546 202182 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
I1228 07:11:42.936653 202182 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/config.json ...
I1228 07:11:42.936673 202182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/config.json: {Name:mk1bb575eaedf054a5c39231661ba5e51bfbfb64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1228 07:11:42.955984 202182 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
I1228 07:11:42.956009 202182 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
I1228 07:11:42.956029 202182 cache.go:243] Successfully downloaded all kic artifacts
I1228 07:11:42.956060 202182 start.go:360] acquireMachinesLock for force-systemd-flag-257442: {Name:mk182766e2370865019edd04ffc6f7524c78e636 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1228 07:11:42.956174 202182 start.go:364] duration metric: took 92.899µs to acquireMachinesLock for "force-systemd-flag-257442"
I1228 07:11:42.956203 202182 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-257442 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-257442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1228 07:11:42.956270 202182 start.go:125] createHost starting for "" (driver="docker")
I1228 07:11:42.959751 202182 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
I1228 07:11:42.959984 202182 start.go:159] libmachine.API.Create for "force-systemd-flag-257442" (driver="docker")
I1228 07:11:42.960019 202182 client.go:173] LocalClient.Create starting
I1228 07:11:42.960087 202182 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem
I1228 07:11:42.960128 202182 main.go:144] libmachine: Decoding PEM data...
I1228 07:11:42.960147 202182 main.go:144] libmachine: Parsing certificate...
I1228 07:11:42.960199 202182 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem
I1228 07:11:42.960227 202182 main.go:144] libmachine: Decoding PEM data...
I1228 07:11:42.960242 202182 main.go:144] libmachine: Parsing certificate...
I1228 07:11:42.960646 202182 cli_runner.go:164] Run: docker network inspect force-systemd-flag-257442 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1228 07:11:42.976005 202182 cli_runner.go:211] docker network inspect force-systemd-flag-257442 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1228 07:11:42.976085 202182 network_create.go:284] running [docker network inspect force-systemd-flag-257442] to gather additional debugging logs...
I1228 07:11:42.976106 202182 cli_runner.go:164] Run: docker network inspect force-systemd-flag-257442
W1228 07:11:42.991634 202182 cli_runner.go:211] docker network inspect force-systemd-flag-257442 returned with exit code 1
I1228 07:11:42.991665 202182 network_create.go:287] error running [docker network inspect force-systemd-flag-257442]: docker network inspect force-systemd-flag-257442: exit status 1
stdout:
[]
stderr:
Error response from daemon: network force-systemd-flag-257442 not found
I1228 07:11:42.991678 202182 network_create.go:289] output of [docker network inspect force-systemd-flag-257442]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network force-systemd-flag-257442 not found
** /stderr **
I1228 07:11:42.991788 202182 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1228 07:11:43.009147 202182 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0cde5aa00dd2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4a:fe:5c:61:4e:40} reservation:<nil>}
I1228 07:11:43.009450 202182 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-7076eb593482 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:f2:28:2e:88:b4:01} reservation:<nil>}
I1228 07:11:43.009714 202182 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-30438d931074 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:10:11:ea:ef:c7} reservation:<nil>}
I1228 07:11:43.010021 202182 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-60444ab3ee70 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:3e:84:9e:6e:bc:3d} reservation:<nil>}
I1228 07:11:43.010405 202182 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d72f0}
I1228 07:11:43.010426 202182 network_create.go:124] attempt to create docker network force-systemd-flag-257442 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
I1228 07:11:43.010488 202182 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-257442 force-systemd-flag-257442
I1228 07:11:43.066640 202182 network_create.go:108] docker network force-systemd-flag-257442 192.168.85.0/24 created
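The subnet scan above starts at 192.168.49.0/24 and advances the third octet in steps of 9 (49, 58, 67, 76, 85) until it finds a /24 with no existing bridge; the start address and step are inferred from the four "skipping subnet" lines. A sketch of that selection loop:

package main

import "fmt"

func main() {
	taken := map[string]bool{ // bridges already present per the log
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
	}
	// Walk candidate third octets 49, 58, 67, ... until a free /24 appears.
	for octet := 49; octet < 255; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if taken[subnet] {
			fmt.Println("skipping subnet", subnet, "that is taken")
			continue
		}
		fmt.Println("using free private subnet", subnet) // 192.168.85.0/24 here
		break
	}
}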
I1228 07:11:43.066670 202182 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-257442" container
I1228 07:11:43.066751 202182 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1228 07:11:43.086978 202182 cli_runner.go:164] Run: docker volume create force-systemd-flag-257442 --label name.minikube.sigs.k8s.io=force-systemd-flag-257442 --label created_by.minikube.sigs.k8s.io=true
I1228 07:11:43.106995 202182 oci.go:103] Successfully created a docker volume force-systemd-flag-257442
I1228 07:11:43.107086 202182 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-257442-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-257442 --entrypoint /usr/bin/test -v force-systemd-flag-257442:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -d /var/lib
I1228 07:11:43.672034 202182 oci.go:107] Successfully prepared a docker volume force-systemd-flag-257442
I1228 07:11:43.672096 202182 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I1228 07:11:43.672107 202182 kic.go:194] Starting extracting preloaded images to volume ...
I1228 07:11:43.672194 202182 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22352-2380/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-257442:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -I lz4 -xf /preloaded.tar -C /extractDir
I1228 07:11:47.619647 202182 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22352-2380/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-257442:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.947413254s)
I1228 07:11:47.619678 202182 kic.go:203] duration metric: took 3.947567208s to extract preloaded images to volume ...
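The ~3.9s step above unpacks the lz4-compressed image preload straight into the named volume by running tar inside a throwaway container, so the host needs neither lz4 nor write access to the volume. A sketch of issuing that same command from Go (paths and image ref copied from the log):

package main

import (
	"log"
	"os/exec"
)

func main() {
	preload := "/home/jenkins/minikube-integration/22352-2380/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4"
	volume := "force-systemd-flag-257442"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351"
	// Disposable container whose only job is: tar -I lz4 -xf preload -C volume.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", preload+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
}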
W1228 07:11:47.619829 202182 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1228 07:11:47.619942 202182 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1228 07:11:47.682992 202182 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-257442 --name force-systemd-flag-257442 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-257442 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-257442 --network force-systemd-flag-257442 --ip 192.168.85.2 --volume force-systemd-flag-257442:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1
I1228 07:11:47.987523 202182 cli_runner.go:164] Run: docker container inspect force-systemd-flag-257442 --format={{.State.Running}}
I1228 07:11:48.014448 202182 cli_runner.go:164] Run: docker container inspect force-systemd-flag-257442 --format={{.State.Status}}
I1228 07:11:48.042972 202182 cli_runner.go:164] Run: docker exec force-systemd-flag-257442 stat /var/lib/dpkg/alternatives/iptables
I1228 07:11:48.101378 202182 oci.go:144] the created container "force-systemd-flag-257442" has a running status.
I1228 07:11:48.101414 202182 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22352-2380/.minikube/machines/force-systemd-flag-257442/id_rsa...
I1228 07:11:48.675904 202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/machines/force-systemd-flag-257442/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I1228 07:11:48.675956 202182 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22352-2380/.minikube/machines/force-systemd-flag-257442/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1228 07:11:48.704271 202182 cli_runner.go:164] Run: docker container inspect force-systemd-flag-257442 --format={{.State.Status}}
I1228 07:11:48.736793 202182 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1228 07:11:48.736819 202182 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-257442 chown docker:docker /home/docker/.ssh/authorized_keys]
I1228 07:11:48.804337 202182 cli_runner.go:164] Run: docker container inspect force-systemd-flag-257442 --format={{.State.Status}}
I1228 07:11:48.834826 202182 machine.go:94] provisionDockerMachine start ...
I1228 07:11:48.834944 202182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-257442
I1228 07:11:48.863393 202182 main.go:144] libmachine: Using SSH client type: native
I1228 07:11:48.863873 202182 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33045 <nil> <nil>}
I1228 07:11:48.863893 202182 main.go:144] libmachine: About to run SSH command:
hostname
I1228 07:11:49.032380 202182 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-257442
I1228 07:11:49.032406 202182 ubuntu.go:182] provisioning hostname "force-systemd-flag-257442"
I1228 07:11:49.032540 202182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-257442
I1228 07:11:49.052336 202182 main.go:144] libmachine: Using SSH client type: native
I1228 07:11:49.052665 202182 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33045 <nil> <nil>}
I1228 07:11:49.052682 202182 main.go:144] libmachine: About to run SSH command:
sudo hostname force-systemd-flag-257442 && echo "force-systemd-flag-257442" | sudo tee /etc/hostname
I1228 07:11:49.213253 202182 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-257442
I1228 07:11:49.213336 202182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-257442
I1228 07:11:49.236648 202182 main.go:144] libmachine: Using SSH client type: native
I1228 07:11:49.236959 202182 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33045 <nil> <nil>}
I1228 07:11:49.236977 202182 main.go:144] libmachine: About to run SSH command:
if ! grep -xq '.*\sforce-systemd-flag-257442' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-257442/g' /etc/hosts;
else
echo '127.0.1.1 force-systemd-flag-257442' | sudo tee -a /etc/hosts;
fi
fi
I1228 07:11:49.397038 202182 main.go:144] libmachine: SSH cmd err, output: <nil>:
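All of the provisioning commands above (hostname, the /etc/hosts edit) travel over SSH to the container's forwarded port 33045 using the freshly generated id_rsa key and the docker user. A minimal standalone equivalent with golang.org/x/crypto/ssh (libmachine's own client wiring differs):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/22352-2380/.minikube/machines/force-systemd-flag-257442/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway local container
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33045", cfg) // port from the log
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("hostname")
	fmt.Printf("%s (err=%v)\n", out, err)
}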
I1228 07:11:49.397065 202182 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22352-2380/.minikube CaCertPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22352-2380/.minikube}
I1228 07:11:49.397085 202182 ubuntu.go:190] setting up certificates
I1228 07:11:49.397094 202182 provision.go:84] configureAuth start
I1228 07:11:49.397159 202182 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-257442
I1228 07:11:49.420305 202182 provision.go:143] copyHostCerts
I1228 07:11:49.420345 202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22352-2380/.minikube/key.pem
I1228 07:11:49.420374 202182 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2380/.minikube/key.pem, removing ...
I1228 07:11:49.420386 202182 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2380/.minikube/key.pem
I1228 07:11:49.420564 202182 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22352-2380/.minikube/key.pem (1679 bytes)
I1228 07:11:49.420662 202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22352-2380/.minikube/ca.pem
I1228 07:11:49.420680 202182 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2380/.minikube/ca.pem, removing ...
I1228 07:11:49.420685 202182 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2380/.minikube/ca.pem
I1228 07:11:49.420715 202182 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22352-2380/.minikube/ca.pem (1082 bytes)
I1228 07:11:49.420761 202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22352-2380/.minikube/cert.pem
I1228 07:11:49.420776 202182 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2380/.minikube/cert.pem, removing ...
I1228 07:11:49.420780 202182 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2380/.minikube/cert.pem
I1228 07:11:49.420805 202182 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22352-2380/.minikube/cert.pem (1123 bytes)
I1228 07:11:49.420852 202182 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-257442 san=[127.0.0.1 192.168.85.2 force-systemd-flag-257442 localhost minikube]
I1228 07:11:49.646258 202182 provision.go:177] copyRemoteCerts
I1228 07:11:49.646332 202182 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1228 07:11:49.646373 202182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-257442
I1228 07:11:49.667681 202182 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33045 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/force-systemd-flag-257442/id_rsa Username:docker}
I1228 07:11:49.768622 202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I1228 07:11:49.768692 202182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1228 07:11:49.786043 202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server.pem -> /etc/docker/server.pem
I1228 07:11:49.786115 202182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
I1228 07:11:49.805713 202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I1228 07:11:49.805777 202182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1228 07:11:49.824117 202182 provision.go:87] duration metric: took 427.001952ms to configureAuth
I1228 07:11:49.824142 202182 ubuntu.go:206] setting minikube options for container-runtime
I1228 07:11:49.824330 202182 config.go:182] Loaded profile config "force-systemd-flag-257442": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1228 07:11:49.824345 202182 machine.go:97] duration metric: took 989.496866ms to provisionDockerMachine
I1228 07:11:49.824352 202182 client.go:176] duration metric: took 6.864322529s to LocalClient.Create
I1228 07:11:49.824369 202182 start.go:167] duration metric: took 6.864385431s to libmachine.API.Create "force-systemd-flag-257442"
I1228 07:11:49.824377 202182 start.go:293] postStartSetup for "force-systemd-flag-257442" (driver="docker")
I1228 07:11:49.824385 202182 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1228 07:11:49.824441 202182 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1228 07:11:49.824572 202182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-257442
I1228 07:11:49.841697 202182 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33045 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/force-systemd-flag-257442/id_rsa Username:docker}
I1228 07:11:49.940326 202182 ssh_runner.go:195] Run: cat /etc/os-release
I1228 07:11:49.943423 202182 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1228 07:11:49.943449 202182 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1228 07:11:49.943460 202182 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-2380/.minikube/addons for local assets ...
I1228 07:11:49.943515 202182 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-2380/.minikube/files for local assets ...
I1228 07:11:49.943595 202182 filesync.go:149] local asset: /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem -> 41952.pem in /etc/ssl/certs
I1228 07:11:49.943601 202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem -> /etc/ssl/certs/41952.pem
I1228 07:11:49.943695 202182 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1228 07:11:49.950748 202182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem --> /etc/ssl/certs/41952.pem (1708 bytes)
I1228 07:11:49.976888 202182 start.go:296] duration metric: took 152.497114ms for postStartSetup
I1228 07:11:49.977259 202182 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-257442
I1228 07:11:49.998212 202182 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/config.json ...
I1228 07:11:49.998522 202182 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1228 07:11:49.998567 202182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-257442
I1228 07:11:50.030466 202182 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33045 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/force-systemd-flag-257442/id_rsa Username:docker}
I1228 07:11:50.125879 202182 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1228 07:11:50.131149 202182 start.go:128] duration metric: took 7.174863789s to createHost
I1228 07:11:50.131177 202182 start.go:83] releasing machines lock for "force-systemd-flag-257442", held for 7.174990436s
I1228 07:11:50.131248 202182 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-257442
I1228 07:11:50.148157 202182 ssh_runner.go:195] Run: cat /version.json
I1228 07:11:50.148166 202182 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1228 07:11:50.148207 202182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-257442
I1228 07:11:50.148236 202182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-257442
I1228 07:11:50.172404 202182 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33045 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/force-systemd-flag-257442/id_rsa Username:docker}
I1228 07:11:50.174217 202182 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33045 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/force-systemd-flag-257442/id_rsa Username:docker}
I1228 07:11:50.357542 202182 ssh_runner.go:195] Run: systemctl --version
I1228 07:11:50.363928 202182 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1228 07:11:50.368163 202182 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1228 07:11:50.368231 202182 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1228 07:11:50.395201 202182 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
I1228 07:11:50.395227 202182 start.go:496] detecting cgroup driver to use...
I1228 07:11:50.395241 202182 start.go:500] using "systemd" cgroup driver as enforced via flags
I1228 07:11:50.395299 202182 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1228 07:11:50.410474 202182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1228 07:11:50.423445 202182 docker.go:218] disabling cri-docker service (if available) ...
I1228 07:11:50.423535 202182 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1228 07:11:50.440554 202182 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1228 07:11:50.458778 202182 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1228 07:11:50.577463 202182 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1228 07:11:50.701377 202182 docker.go:234] disabling docker service ...
I1228 07:11:50.701466 202182 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1228 07:11:50.726518 202182 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1228 07:11:50.741501 202182 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1228 07:11:50.867242 202182 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1228 07:11:50.974607 202182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1228 07:11:50.987492 202182 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1228 07:11:51.008605 202182 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1228 07:11:51.019015 202182 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1228 07:11:51.028781 202182 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
I1228 07:11:51.028861 202182 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
I1228 07:11:51.038465 202182 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1228 07:11:51.047159 202182 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1228 07:11:51.055758 202182 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1228 07:11:51.064984 202182 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1228 07:11:51.072909 202182 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1228 07:11:51.081912 202182 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1228 07:11:51.090824 202182 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
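The run of sed commands above is what --force-systemd actually does to the runtime: it rewrites /etc/containerd/config.toml to pin the sandbox image, normalize the runc runtime type, and flip runc to the systemd cgroup driver (SystemdCgroup = true), before restarting containerd below. The key substitution, expressed in Go instead of sed:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := []byte("  [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n    SystemdCgroup = false\n")
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Printf("%s", re.ReplaceAll(conf, []byte("${1}SystemdCgroup = true")))
}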
I1228 07:11:51.099899 202182 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1228 07:11:51.107450 202182 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1228 07:11:51.115067 202182 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1228 07:11:51.235548 202182 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1228 07:11:51.376438 202182 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
I1228 07:11:51.376630 202182 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1228 07:11:51.380725 202182 start.go:574] Will wait 60s for crictl version
I1228 07:11:51.380800 202182 ssh_runner.go:195] Run: which crictl
I1228 07:11:51.384409 202182 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1228 07:11:51.409180 202182 start.go:590] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v2.2.1
RuntimeApiVersion: v1
I1228 07:11:51.409291 202182 ssh_runner.go:195] Run: containerd --version
I1228 07:11:51.430646 202182 ssh_runner.go:195] Run: containerd --version
I1228 07:11:51.460595 202182 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
I1228 07:11:51.463697 202182 cli_runner.go:164] Run: docker network inspect force-systemd-flag-257442 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1228 07:11:51.480057 202182 ssh_runner.go:195] Run: grep 192.168.85.1 host.minikube.internal$ /etc/hosts
I1228 07:11:51.484647 202182 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1228 07:11:51.495570 202182 kubeadm.go:884] updating cluster {Name:force-systemd-flag-257442 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-257442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
I1228 07:11:51.495689 202182 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I1228 07:11:51.495768 202182 ssh_runner.go:195] Run: sudo crictl images --output json
I1228 07:11:51.534723 202182 containerd.go:635] all images are preloaded for containerd runtime.
I1228 07:11:51.534803 202182 containerd.go:542] Images already preloaded, skipping extraction
I1228 07:11:51.534903 202182 ssh_runner.go:195] Run: sudo crictl images --output json
I1228 07:11:51.559789 202182 containerd.go:635] all images are preloaded for containerd runtime.
I1228 07:11:51.559808 202182 cache_images.go:86] Images are preloaded, skipping loading
I1228 07:11:51.559817 202182 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 containerd true true} ...
I1228 07:11:51.559914 202182 kubeadm.go:947] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-257442 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
[Install]
config:
{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-257442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1228 07:11:51.559976 202182 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
I1228 07:11:51.585689 202182 cni.go:84] Creating CNI manager for ""
I1228 07:11:51.585767 202182 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1228 07:11:51.585801 202182 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1228 07:11:51.585861 202182 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-257442 NodeName:force-systemd-flag-257442 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1228 07:11:51.586026 202182 kubeadm.go:203] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.85.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
name: "force-systemd-flag-257442"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.85.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.35.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I1228 07:11:51.586150 202182 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
I1228 07:11:51.594509 202182 binaries.go:51] Found k8s binaries, skipping transfer
I1228 07:11:51.594591 202182 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1228 07:11:51.602306 202182 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
I1228 07:11:51.614702 202182 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1228 07:11:51.627096 202182 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
I1228 07:11:51.639529 202182 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I1228 07:11:51.643115 202182 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1228 07:11:51.652078 202182 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1228 07:11:51.778040 202182 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1228 07:11:51.796591 202182 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442 for IP: 192.168.85.2
I1228 07:11:51.796682 202182 certs.go:195] generating shared ca certs ...
I1228 07:11:51.796718 202182 certs.go:227] acquiring lock for ca certs: {Name:mk867c51c31d3664751580ce57c19c8b4916033e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1228 07:11:51.796936 202182 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-2380/.minikube/ca.key
I1228 07:11:51.797027 202182 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-2380/.minikube/proxy-client-ca.key
I1228 07:11:51.797064 202182 certs.go:257] generating profile certs ...
I1228 07:11:51.797180 202182 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/client.key
I1228 07:11:51.797224 202182 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/client.crt with IP's: []
I1228 07:11:52.013074 202182 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/client.crt ...
I1228 07:11:52.013118 202182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/client.crt: {Name:mk7aed4b1361cad35efdb364bf3318878e0ba011 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1228 07:11:52.013324 202182 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/client.key ...
I1228 07:11:52.013339 202182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/client.key: {Name:mk8ec5637167dd5ffdf85444ad06fe325864a279 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1228 07:11:52.013439 202182 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/apiserver.key.67f743be
I1228 07:11:52.013462 202182 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/apiserver.crt.67f743be with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
I1228 07:11:52.367478 202182 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/apiserver.crt.67f743be ...
I1228 07:11:52.367511 202182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/apiserver.crt.67f743be: {Name:mkda9f7af1a3a08068bbee1ddd2a4b4ef4a9f820 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1228 07:11:52.367692 202182 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/apiserver.key.67f743be ...
I1228 07:11:52.367707 202182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/apiserver.key.67f743be: {Name:mk045bbb68239d684b49be802faad160202aaf3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1228 07:11:52.367798 202182 certs.go:382] copying /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/apiserver.crt.67f743be -> /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/apiserver.crt
I1228 07:11:52.367875 202182 certs.go:386] copying /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/apiserver.key.67f743be -> /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/apiserver.key
I1228 07:11:52.367939 202182 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/proxy-client.key
I1228 07:11:52.367956 202182 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/proxy-client.crt with IP's: []
I1228 07:11:52.450774 202182 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/proxy-client.crt ...
I1228 07:11:52.450804 202182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/proxy-client.crt: {Name:mkad6c1484d2eff4419d1163b5dc950a7aeb71a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1228 07:11:52.450986 202182 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/proxy-client.key ...
I1228 07:11:52.450999 202182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/proxy-client.key: {Name:mk7ffb474cec5cc67e49a8a4a4b043205762d02d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
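Note: the SAN list on the apiserver certificate above is worth reading: 10.96.0.1 is the first address of the 10.96.0.0/12 ServiceCIDR (where the in-cluster kubernetes Service lives), 192.168.85.2 is the node IP, and 127.0.0.1 plus 10.0.0.1 cover loopback and what appears to be a legacy in-cluster address minikube still includes. To confirm what was actually issued, using the profile path from this log (OpenSSL 1.1.1+ for -ext):

  openssl x509 -noout -ext subjectAltName \
    -in /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/apiserver.crt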
I1228 07:11:52.451100 202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I1228 07:11:52.451122 202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I1228 07:11:52.451135 202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I1228 07:11:52.451157 202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I1228 07:11:52.451173 202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I1228 07:11:52.451198 202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I1228 07:11:52.451213 202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I1228 07:11:52.451224 202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I1228 07:11:52.451276 202182 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/4195.pem (1338 bytes)
W1228 07:11:52.451317 202182 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-2380/.minikube/certs/4195_empty.pem, impossibly tiny 0 bytes
I1228 07:11:52.451330 202182 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca-key.pem (1675 bytes)
I1228 07:11:52.451359 202182 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem (1082 bytes)
I1228 07:11:52.451383 202182 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem (1123 bytes)
I1228 07:11:52.451418 202182 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/key.pem (1679 bytes)
I1228 07:11:52.451466 202182 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem (1708 bytes)
I1228 07:11:52.451500 202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem -> /usr/share/ca-certificates/41952.pem
I1228 07:11:52.451519 202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I1228 07:11:52.451533 202182 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/4195.pem -> /usr/share/ca-certificates/4195.pem
I1228 07:11:52.452048 202182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1228 07:11:52.470544 202182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1228 07:11:52.489878 202182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1228 07:11:52.510247 202182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1228 07:11:52.528132 202182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
I1228 07:11:52.545968 202182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1228 07:11:52.563355 202182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1228 07:11:52.580238 202182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/force-systemd-flag-257442/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1228 07:11:52.598910 202182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem --> /usr/share/ca-certificates/41952.pem (1708 bytes)
I1228 07:11:52.617614 202182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1228 07:11:52.636247 202182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/certs/4195.pem --> /usr/share/ca-certificates/4195.pem (1338 bytes)
I1228 07:11:52.654304 202182 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
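Note: at this point the whole trust bundle has been pushed into the node: cluster certs and keys under /var/lib/minikube/certs for kubeadm to consume, the extra user certs under /usr/share/ca-certificates, and the kubeconfig under /var/lib/minikube. A quick spot check from the host, using the profile name from this run:

  out/minikube-linux-arm64 -p force-systemd-flag-257442 ssh -- sudo ls -l /var/lib/minikube/certs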
I1228 07:11:52.667049 202182 ssh_runner.go:195] Run: openssl version
I1228 07:11:52.673735 202182 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41952.pem
I1228 07:11:52.681295 202182 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41952.pem /etc/ssl/certs/41952.pem
I1228 07:11:52.688626 202182 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41952.pem
I1228 07:11:52.692403 202182 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:34 /usr/share/ca-certificates/41952.pem
I1228 07:11:52.692584 202182 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41952.pem
I1228 07:11:52.735038 202182 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I1228 07:11:52.742898 202182 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41952.pem /etc/ssl/certs/3ec20f2e.0
I1228 07:11:52.750466 202182 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1228 07:11:52.758067 202182 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1228 07:11:52.765682 202182 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1228 07:11:52.769877 202182 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:28 /usr/share/ca-certificates/minikubeCA.pem
I1228 07:11:52.769968 202182 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1228 07:11:52.810873 202182 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1228 07:11:52.818298 202182 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
I1228 07:11:52.825860 202182 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4195.pem
I1228 07:11:52.833320 202182 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4195.pem /etc/ssl/certs/4195.pem
I1228 07:11:52.840574 202182 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4195.pem
I1228 07:11:52.844181 202182 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:34 /usr/share/ca-certificates/4195.pem
I1228 07:11:52.844245 202182 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4195.pem
I1228 07:11:52.885195 202182 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I1228 07:11:52.893615 202182 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4195.pem /etc/ssl/certs/51391683.0
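Note: each openssl x509 -hash -noout call above computes the subject-name hash that OpenSSL's CA lookup uses, and the ln -fs that follows publishes the certificate under /etc/ssl/certs/<hash>.0 so anything trusting the system store can find it; b5213941.0, for example, is exactly the hash of minikubeCA.pem. The derivation, spelled out:

  # <hash>.0 symlink name as consumed by the system trust store
  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # h = b5213941 here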
I1228 07:11:52.900889 202182 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1228 07:11:52.904539 202182 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1228 07:11:52.904638 202182 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-257442 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-257442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1228 07:11:52.904749 202182 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
W1228 07:11:52.915402 202182 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:11:52Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I1228 07:11:52.915477 202182 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1228 07:11:52.923486 202182 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1228 07:11:52.931211 202182 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1228 07:11:52.931307 202182 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1228 07:11:52.939006 202182 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1228 07:11:52.939027 202182 kubeadm.go:158] found existing configuration files:
I1228 07:11:52.939087 202182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1228 07:11:52.946627 202182 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1228 07:11:52.946691 202182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1228 07:11:52.954506 202182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1228 07:11:52.963900 202182 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1228 07:11:52.963966 202182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1228 07:11:52.971542 202182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1228 07:11:52.979414 202182 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1228 07:11:52.979485 202182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1228 07:11:52.986647 202182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1228 07:11:52.994899 202182 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1228 07:11:52.995009 202182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
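Note: the grep/rm pairs above are minikube's stale-kubeconfig sweep: any /etc/kubernetes/*.conf that does not point at https://control-plane.minikube.internal:8443 is deleted so the upcoming kubeadm init regenerates it. Here every grep fails with status 2 simply because the files do not exist yet on a first start. The same sweep, written as a loop (paths and URL taken from this log):

  for f in admin kubelet controller-manager scheduler; do
    sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
      || sudo rm -f "/etc/kubernetes/${f}.conf"
  done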
I1228 07:11:53.003577 202182 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1228 07:11:53.051927 202182 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
I1228 07:11:53.056727 202182 kubeadm.go:319] [preflight] Running pre-flight checks
I1228 07:11:53.128709 202182 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1228 07:11:53.128782 202182 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1228 07:11:53.128818 202182 kubeadm.go:319] OS: Linux
I1228 07:11:53.128866 202182 kubeadm.go:319] CGROUPS_CPU: enabled
I1228 07:11:53.128914 202182 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1228 07:11:53.128962 202182 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1228 07:11:53.129012 202182 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1228 07:11:53.129062 202182 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1228 07:11:53.129111 202182 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1228 07:11:53.129156 202182 kubeadm.go:319] CGROUPS_PIDS: enabled
I1228 07:11:53.129205 202182 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1228 07:11:53.129251 202182 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1228 07:11:53.196911 202182 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1228 07:11:53.197098 202182 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1228 07:11:53.197193 202182 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1228 07:11:53.206716 202182 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1228 07:11:53.210202 202182 out.go:252] - Generating certificates and keys ...
I1228 07:11:53.210291 202182 kubeadm.go:319] [certs] Using existing ca certificate authority
I1228 07:11:53.210361 202182 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1228 07:11:53.342406 202182 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1228 07:11:53.807332 202182 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1228 07:11:54.152653 202182 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1228 07:11:54.360536 202182 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1228 07:11:54.510375 202182 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1228 07:11:54.510779 202182 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-257442 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I1228 07:11:54.630196 202182 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1228 07:11:54.630431 202182 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-257442 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I1228 07:11:55.093747 202182 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1228 07:11:55.202960 202182 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1228 07:11:55.357297 202182 kubeadm.go:319] [certs] Generating "sa" key and public key
I1228 07:11:55.357650 202182 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1228 07:11:55.557158 202182 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1228 07:11:55.707761 202182 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1228 07:11:55.947840 202182 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1228 07:11:56.066861 202182 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1228 07:11:56.190344 202182 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1228 07:11:56.190993 202182 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1228 07:11:56.193691 202182 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1228 07:11:56.197563 202182 out.go:252] - Booting up control plane ...
I1228 07:11:56.197679 202182 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1228 07:11:56.197771 202182 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1228 07:11:56.197847 202182 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1228 07:11:56.216231 202182 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1228 07:11:56.216354 202182 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1228 07:11:56.223498 202182 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1228 07:11:56.224057 202182 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1228 07:11:56.224309 202182 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1228 07:11:56.359584 202182 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1228 07:11:56.359704 202182 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1228 07:15:56.359053 202182 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000118942s
I1228 07:15:56.359085 202182 kubeadm.go:319]
I1228 07:15:56.359144 202182 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1228 07:15:56.359183 202182 kubeadm.go:319] - The kubelet is not running
I1228 07:15:56.359292 202182 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1228 07:15:56.359301 202182 kubeadm.go:319]
I1228 07:15:56.359405 202182 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1228 07:15:56.359441 202182 kubeadm.go:319] - 'systemctl status kubelet'
I1228 07:15:56.359476 202182 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1228 07:15:56.359484 202182 kubeadm.go:319]
I1228 07:15:56.372655 202182 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1228 07:15:56.373414 202182 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1228 07:15:56.373650 202182 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1228 07:15:56.374256 202182 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I1228 07:15:56.374302 202182 kubeadm.go:319]
I1228 07:15:56.374426 202182 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
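Note: the failure above is a timeout, not a crash report. kubeadm polls the kubelet's local health endpoint for up to four minutes, and "connection refused" on 127.0.0.1:10248 means the kubelet process never came up to bind the port at all. The useful evidence therefore lives in the kubelet unit log on the node, which is what kubeadm's own hints point at:

  # run inside the node (e.g. via minikube ssh)
  systemctl status kubelet
  journalctl -xeu kubelet
  curl -sS http://127.0.0.1:10248/healthz   # the exact probe kubeadm was retrying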
W1228 07:15:56.374572 202182 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-257442 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-257442 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000118942s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
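Note: before the retry below, the recurring SystemVerification warning deserves attention: this host is still on cgroups v1, and the warning states that kubelet v1.35+ will not run there unless the kubelet configuration option 'FailCgroupV1' is set to 'false'. Given that the kubelet never became healthy, that is a plausible culprit for this run. A hedged sketch of the opt-in, appending to the config path this log shows kubeadm writing, and assuming the lowerCamelCase spelling of the field named in the warning:

  # spelling of failCgroupV1 assumed from the warning text; verify against your kubelet version
  cat <<'EOF' | sudo tee -a /var/lib/kubelet/config.yaml
  failCgroupV1: false
  EOF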
I1228 07:15:56.374955 202182 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I1228 07:15:56.816009 202182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
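Note: systemctl is-active --quiet prints nothing; minikube reads only the exit status (0 if the unit is active, non-zero otherwise), which makes it a convenient guard in scripts:

  sudo systemctl is-active --quiet kubelet && echo "kubelet running" || echo "kubelet not running"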
I1228 07:15:56.830130 202182 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1228 07:15:56.830189 202182 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1228 07:15:56.839676 202182 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1228 07:15:56.839743 202182 kubeadm.go:158] found existing configuration files:
I1228 07:15:56.839818 202182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1228 07:15:56.848800 202182 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1228 07:15:56.848913 202182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1228 07:15:56.858141 202182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1228 07:15:56.868016 202182 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1228 07:15:56.868125 202182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1228 07:15:56.876557 202182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1228 07:15:56.886001 202182 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1228 07:15:56.886129 202182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1228 07:15:56.894421 202182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1228 07:15:56.903733 202182 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1228 07:15:56.903858 202182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1228 07:15:56.912105 202182 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1228 07:15:56.973760 202182 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
I1228 07:15:56.974624 202182 kubeadm.go:319] [preflight] Running pre-flight checks
I1228 07:15:57.076378 202182 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1228 07:15:57.076579 202182 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1228 07:15:57.076651 202182 kubeadm.go:319] OS: Linux
I1228 07:15:57.076720 202182 kubeadm.go:319] CGROUPS_CPU: enabled
I1228 07:15:57.076805 202182 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1228 07:15:57.076885 202182 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1228 07:15:57.076967 202182 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1228 07:15:57.077050 202182 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1228 07:15:57.077135 202182 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1228 07:15:57.077218 202182 kubeadm.go:319] CGROUPS_PIDS: enabled
I1228 07:15:57.077302 202182 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1228 07:15:57.077386 202182 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1228 07:15:57.173412 202182 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1228 07:15:57.173584 202182 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1228 07:15:57.173716 202182 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1228 07:15:57.193049 202182 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1228 07:15:57.196350 202182 out.go:252] - Generating certificates and keys ...
I1228 07:15:57.196587 202182 kubeadm.go:319] [certs] Using existing ca certificate authority
I1228 07:15:57.196675 202182 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1228 07:15:57.196779 202182 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1228 07:15:57.197830 202182 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
I1228 07:15:57.198363 202182 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
I1228 07:15:57.198849 202182 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
I1228 07:15:57.199374 202182 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
I1228 07:15:57.199787 202182 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
I1228 07:15:57.200352 202182 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1228 07:15:57.200860 202182 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1228 07:15:57.201385 202182 kubeadm.go:319] [certs] Using the existing "sa" key
I1228 07:15:57.201487 202182 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1228 07:15:57.595218 202182 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1228 07:15:57.831579 202182 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1228 07:15:58.069431 202182 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1228 07:15:58.608051 202182 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1228 07:15:58.960100 202182 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1228 07:15:58.960768 202182 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1228 07:15:58.963496 202182 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1228 07:15:58.967038 202182 out.go:252] - Booting up control plane ...
I1228 07:15:58.967133 202182 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1228 07:15:58.967207 202182 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1228 07:15:58.968494 202182 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1228 07:15:58.990175 202182 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1228 07:15:58.990624 202182 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1228 07:15:58.998239 202182 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1228 07:15:58.998885 202182 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1228 07:15:58.998948 202182 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1228 07:15:59.134789 202182 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1228 07:15:59.134903 202182 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1228 07:19:59.133520 202182 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000148247s
I1228 07:19:59.133544 202182 kubeadm.go:319]
I1228 07:19:59.133603 202182 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1228 07:19:59.133636 202182 kubeadm.go:319] - The kubelet is not running
I1228 07:19:59.134115 202182 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1228 07:19:59.134145 202182 kubeadm.go:319]
I1228 07:19:59.134503 202182 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1228 07:19:59.134568 202182 kubeadm.go:319] - 'systemctl status kubelet'
I1228 07:19:59.134623 202182 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1228 07:19:59.134628 202182 kubeadm.go:319]
I1228 07:19:59.139795 202182 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1228 07:19:59.140678 202182 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1228 07:19:59.141000 202182 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1228 07:19:59.142131 202182 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I1228 07:19:59.142218 202182 kubeadm.go:319]
I1228 07:19:59.142358 202182 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
I1228 07:19:59.142429 202182 kubeadm.go:403] duration metric: took 8m6.237794878s to StartCluster
I1228 07:19:59.142536 202182 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
E1228 07:19:59.154918 202182 logs.go:279] Failed to list containers for "kube-apiserver": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:19:59Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I1228 07:19:59.154991 202182 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
E1228 07:19:59.166191 202182 logs.go:279] Failed to list containers for "etcd": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:19:59Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I1228 07:19:59.166259 202182 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
E1228 07:19:59.177549 202182 logs.go:279] Failed to list containers for "coredns": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:19:59Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I1228 07:19:59.177619 202182 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
E1228 07:19:59.188550 202182 logs.go:279] Failed to list containers for "kube-scheduler": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:19:59Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I1228 07:19:59.188622 202182 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
E1228 07:19:59.199522 202182 logs.go:279] Failed to list containers for "kube-proxy": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:19:59Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I1228 07:19:59.199608 202182 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
E1228 07:19:59.222184 202182 logs.go:279] Failed to list containers for "kube-controller-manager": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:19:59Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I1228 07:19:59.222259 202182 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
E1228 07:19:59.238199 202182 logs.go:279] Failed to list containers for "kindnet": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:19:59Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
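Note: the identical runc failure for every component here is expected. minikube lists containers through runc's state directory for containerd's k8s.io namespace, and /run/containerd/runc/k8s.io is only created once containerd has actually started a container in that namespace, so "no such file or directory" just restates that no Kubernetes container ever ran. An equivalent check through the CRI endpoint, using the containerd socket path shown earlier in this log:

  sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a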
I1228 07:19:59.238223 202182 logs.go:123] Gathering logs for containerd ...
I1228 07:19:59.238235 202182 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1228 07:19:59.285575 202182 logs.go:123] Gathering logs for container status ...
I1228 07:19:59.285608 202182 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1228 07:19:59.317760 202182 logs.go:123] Gathering logs for kubelet ...
I1228 07:19:59.317788 202182 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1228 07:19:59.379482 202182 logs.go:123] Gathering logs for dmesg ...
I1228 07:19:59.379521 202182 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1228 07:19:59.397974 202182 logs.go:123] Gathering logs for describe nodes ...
I1228 07:19:59.398001 202182 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1228 07:19:59.472720 202182 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1228 07:19:59.462854 4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:19:59.463664 4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:19:59.466781 4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:19:59.467148 4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:19:59.468708 4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
E1228 07:19:59.462854 4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:19:59.463664 4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:19:59.466781 4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:19:59.467148 4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:19:59.468708 4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
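Note: the describe-nodes failure is a downstream symptom. The kubeconfig points kubectl at localhost:8443, where the kube-apiserver static pod would listen, but the apiserver never started because the kubelet never did, so every call dies at TCP connect. A one-liner to confirm nothing is listening (run on the node):

  sudo ss -ltnp | grep -w 8443 || echo "no listener on :8443"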
W1228 07:19:59.472790 202182 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000148247s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
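# A minimal triage sketch for the kubelet failure above, assuming this run's profile
# name and that the kicbase image (systemd-based) ships systemctl, journalctl and curl:
out/minikube-linux-arm64 ssh -p force-systemd-flag-257442 -- sudo systemctl status kubelet
out/minikube-linux-arm64 ssh -p force-systemd-flag-257442 -- sudo journalctl -xeu kubelet --no-pager
# The health endpoint kubeadm polls; "connection refused" means the kubelet never bound it:
out/minikube-linux-arm64 ssh -p force-systemd-flag-257442 -- curl -s http://127.0.0.1:10248/healthz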
W1228 07:19:59.472889 202182 out.go:285] *
W1228 07:19:59.472991 202182 out.go:285] *
W1228 07:19:59.473250 202182 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1228 07:19:59.480297 202182 out.go:203]
W1228 07:19:59.483295 202182 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000148247s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
W1228 07:19:59.483369 202182 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W1228 07:19:59.483394 202182 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I1228 07:19:59.486670 202182 out.go:203]
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-flag-257442 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=containerd" : exit status 109
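The suggestion in the stderr above points at a cgroup-driver mismatch: the host docker reports CgroupDriver:cgroupfs while --force-systemd switches the in-node runtime to systemd. A hedged retry sketch that passes exactly the override the log suggests (the other flags mirror this run's invocation):

# Retry with the cgroup-driver override suggested in the failure output above.
out/minikube-linux-arm64 start -p force-systemd-flag-257442 --memory=3072 --force-systemd \
  --driver=docker --container-runtime=containerd \
  --extra-config=kubelet.cgroup-driver=systemd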
docker_test.go:121: (dbg) Run: out/minikube-linux-arm64 -p force-systemd-flag-257442 ssh "cat /etc/containerd/config.toml"
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2025-12-28 07:19:59.894768358 +0000 UTC m=+3122.546597803
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======> post-mortem[TestForceSystemdFlag]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======> post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:240: (dbg) Run: docker inspect force-systemd-flag-257442
helpers_test.go:244: (dbg) docker inspect force-systemd-flag-257442:
-- stdout --
[
{
"Id": "df7d2cc9f5a10e8df9d76fbcd87a441709d6dcc1f1bab89960f07c33153d4eed",
"Created": "2025-12-28T07:11:47.699128468Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 202616,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-12-28T07:11:47.761413388Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:02c8841c3fcd0bf74c67c65bf527029b123b3355e8bc89181a31113e9282ee3c",
"ResolvConfPath": "/var/lib/docker/containers/df7d2cc9f5a10e8df9d76fbcd87a441709d6dcc1f1bab89960f07c33153d4eed/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/df7d2cc9f5a10e8df9d76fbcd87a441709d6dcc1f1bab89960f07c33153d4eed/hostname",
"HostsPath": "/var/lib/docker/containers/df7d2cc9f5a10e8df9d76fbcd87a441709d6dcc1f1bab89960f07c33153d4eed/hosts",
"LogPath": "/var/lib/docker/containers/df7d2cc9f5a10e8df9d76fbcd87a441709d6dcc1f1bab89960f07c33153d4eed/df7d2cc9f5a10e8df9d76fbcd87a441709d6dcc1f1bab89960f07c33153d4eed-json.log",
"Name": "/force-systemd-flag-257442",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"force-systemd-flag-257442:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "force-systemd-flag-257442",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 3221225472,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 6442450944,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "df7d2cc9f5a10e8df9d76fbcd87a441709d6dcc1f1bab89960f07c33153d4eed",
"LowerDir": "/var/lib/docker/overlay2/ec2e191b50ac3af46a83196265bb944a237bd849a13dbfb1dfcaa50908665f5c-init/diff:/var/lib/docker/overlay2/0d0da319aa3bf2f05533a9c9285b57705aab73f2ff1fd705901f29c2d4464ccd/diff",
"MergedDir": "/var/lib/docker/overlay2/ec2e191b50ac3af46a83196265bb944a237bd849a13dbfb1dfcaa50908665f5c/merged",
"UpperDir": "/var/lib/docker/overlay2/ec2e191b50ac3af46a83196265bb944a237bd849a13dbfb1dfcaa50908665f5c/diff",
"WorkDir": "/var/lib/docker/overlay2/ec2e191b50ac3af46a83196265bb944a237bd849a13dbfb1dfcaa50908665f5c/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "force-systemd-flag-257442",
"Source": "/var/lib/docker/volumes/force-systemd-flag-257442/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "force-systemd-flag-257442",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "force-systemd-flag-257442",
"name.minikube.sigs.k8s.io": "force-systemd-flag-257442",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "749b572e0b759ff76bd21e42cc7c467a75cdc4cadfbd58ed0720a6113433b82b",
"SandboxKey": "/var/run/docker/netns/749b572e0b75",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33045"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33046"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33049"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33047"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33048"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"force-systemd-flag-257442": {
"IPAMConfig": {
"IPv4Address": "192.168.85.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "7a:84:78:58:5b:17",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "f65e8822beda8f345fed2fac182d65f1a5b5f1057db521193b1e64cea0af58c2",
"EndpointID": "74a46310530e51cd6b12cd1d4107d49ae137a2581b6bf9e7941c933a0f817d14",
"Gateway": "192.168.85.1",
"IPAddress": "192.168.85.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"force-systemd-flag-257442",
"df7d2cc9f5a1"
]
}
}
}
}
]
-- /stdout --
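Most of the inspect dump above is noise for this post-mortem. A hedged sketch that pulls only the fields usually needed here (container state, the host port mapped to the API server, the node IP) via docker's standard --format templating:

# Container state: "running" per the State block in the dump above.
docker inspect -f '{{.State.Status}}' force-systemd-flag-257442
# Host port mapped to the API server port 8443 (33048 in this run).
docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' force-systemd-flag-257442
# Node IP on the profile's bridge network (192.168.85.2 in this run).
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' force-systemd-flag-257442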
helpers_test.go:248: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-257442 -n force-systemd-flag-257442
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-257442 -n force-systemd-flag-257442: exit status 6 (682.671844ms)
-- stdout --
Running
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1228 07:20:00.565869 231453 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-257442" does not appear in /home/jenkins/minikube-integration/22352-2380/kubeconfig
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
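The exit status 6 here is a kubeconfig problem rather than a host problem: the host is Running, but the profile's endpoint is missing from the kubeconfig the test points at. A hedged repair sketch following the warning printed in the stdout above:

# Rewrite this profile's kubeconfig entry, as the warning suggests, then re-check.
out/minikube-linux-arm64 update-context -p force-systemd-flag-257442
out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-257442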
helpers_test.go:253: <<< TestForceSystemdFlag FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======> post-mortem[TestForceSystemdFlag]: minikube logs <======
helpers_test.go:256: (dbg) Run: out/minikube-linux-arm64 -p force-systemd-flag-257442 logs -n 25
helpers_test.go:261: TestForceSystemdFlag logs:
-- stdout --
==> Audit <==
┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ ssh │ cert-options-913529 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt │ cert-options-913529 │ jenkins │ v1.37.0 │ 28 Dec 25 07:14 UTC │ 28 Dec 25 07:14 UTC │
│ ssh │ -p cert-options-913529 -- sudo cat /etc/kubernetes/admin.conf │ cert-options-913529 │ jenkins │ v1.37.0 │ 28 Dec 25 07:14 UTC │ 28 Dec 25 07:14 UTC │
│ delete │ -p cert-options-913529 │ cert-options-913529 │ jenkins │ v1.37.0 │ 28 Dec 25 07:14 UTC │ 28 Dec 25 07:14 UTC │
│ start │ -p old-k8s-version-251758 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-251758 │ jenkins │ v1.37.0 │ 28 Dec 25 07:14 UTC │ 28 Dec 25 07:15 UTC │
│ addons │ enable metrics-server -p old-k8s-version-251758 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ old-k8s-version-251758 │ jenkins │ v1.37.0 │ 28 Dec 25 07:15 UTC │ 28 Dec 25 07:15 UTC │
│ stop │ -p old-k8s-version-251758 --alsologtostderr -v=3 │ old-k8s-version-251758 │ jenkins │ v1.37.0 │ 28 Dec 25 07:15 UTC │ 28 Dec 25 07:15 UTC │
│ addons │ enable dashboard -p old-k8s-version-251758 --images=MetricsScraper=registry.k8s.io/echoserver:1.4 │ old-k8s-version-251758 │ jenkins │ v1.37.0 │ 28 Dec 25 07:15 UTC │ 28 Dec 25 07:15 UTC │
│ start │ -p old-k8s-version-251758 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-251758 │ jenkins │ v1.37.0 │ 28 Dec 25 07:15 UTC │ 28 Dec 25 07:16 UTC │
│ image │ old-k8s-version-251758 image list --format=json │ old-k8s-version-251758 │ jenkins │ v1.37.0 │ 28 Dec 25 07:16 UTC │ 28 Dec 25 07:16 UTC │
│ pause │ -p old-k8s-version-251758 --alsologtostderr -v=1 │ old-k8s-version-251758 │ jenkins │ v1.37.0 │ 28 Dec 25 07:16 UTC │ 28 Dec 25 07:16 UTC │
│ unpause │ -p old-k8s-version-251758 --alsologtostderr -v=1 │ old-k8s-version-251758 │ jenkins │ v1.37.0 │ 28 Dec 25 07:16 UTC │ 28 Dec 25 07:16 UTC │
│ delete │ -p old-k8s-version-251758 │ old-k8s-version-251758 │ jenkins │ v1.37.0 │ 28 Dec 25 07:16 UTC │ 28 Dec 25 07:16 UTC │
│ delete │ -p old-k8s-version-251758 │ old-k8s-version-251758 │ jenkins │ v1.37.0 │ 28 Dec 25 07:16 UTC │ 28 Dec 25 07:16 UTC │
│ start │ -p no-preload-863373 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0 │ no-preload-863373 │ jenkins │ v1.37.0 │ 28 Dec 25 07:16 UTC │ 28 Dec 25 07:17 UTC │
│ addons │ enable metrics-server -p no-preload-863373 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ no-preload-863373 │ jenkins │ v1.37.0 │ 28 Dec 25 07:17 UTC │ 28 Dec 25 07:17 UTC │
│ stop │ -p no-preload-863373 --alsologtostderr -v=3 │ no-preload-863373 │ jenkins │ v1.37.0 │ 28 Dec 25 07:17 UTC │ 28 Dec 25 07:18 UTC │
│ addons │ enable dashboard -p no-preload-863373 --images=MetricsScraper=registry.k8s.io/echoserver:1.4 │ no-preload-863373 │ jenkins │ v1.37.0 │ 28 Dec 25 07:18 UTC │ 28 Dec 25 07:18 UTC │
│ start │ -p no-preload-863373 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0 │ no-preload-863373 │ jenkins │ v1.37.0 │ 28 Dec 25 07:18 UTC │ 28 Dec 25 07:18 UTC │
│ image │ no-preload-863373 image list --format=json │ no-preload-863373 │ jenkins │ v1.37.0 │ 28 Dec 25 07:19 UTC │ 28 Dec 25 07:19 UTC │
│ pause │ -p no-preload-863373 --alsologtostderr -v=1 │ no-preload-863373 │ jenkins │ v1.37.0 │ 28 Dec 25 07:19 UTC │ 28 Dec 25 07:19 UTC │
│ unpause │ -p no-preload-863373 --alsologtostderr -v=1 │ no-preload-863373 │ jenkins │ v1.37.0 │ 28 Dec 25 07:19 UTC │ 28 Dec 25 07:19 UTC │
│ delete │ -p no-preload-863373 │ no-preload-863373 │ jenkins │ v1.37.0 │ 28 Dec 25 07:19 UTC │ 28 Dec 25 07:19 UTC │
│ delete │ -p no-preload-863373 │ no-preload-863373 │ jenkins │ v1.37.0 │ 28 Dec 25 07:19 UTC │ 28 Dec 25 07:19 UTC │
│ start │ -p embed-certs-468470 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0 │ embed-certs-468470 │ jenkins │ v1.37.0 │ 28 Dec 25 07:19 UTC │ │
│ ssh │ force-systemd-flag-257442 ssh cat /etc/containerd/config.toml │ force-systemd-flag-257442 │ jenkins │ v1.37.0 │ 28 Dec 25 07:19 UTC │ 28 Dec 25 07:19 UTC │
└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/28 07:19:15
Running on machine: ip-172-31-30-239
Binary: Built with gc go1.25.5 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1228 07:19:15.855527 228126 out.go:360] Setting OutFile to fd 1 ...
I1228 07:19:15.855701 228126 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 07:19:15.855731 228126 out.go:374] Setting ErrFile to fd 2...
I1228 07:19:15.855753 228126 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 07:19:15.856019 228126 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2380/.minikube/bin
I1228 07:19:15.856535 228126 out.go:368] Setting JSON to false
I1228 07:19:15.857395 228126 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3706,"bootTime":1766902650,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
I1228 07:19:15.857493 228126 start.go:143] virtualization:
I1228 07:19:15.861932 228126 out.go:179] * [embed-certs-468470] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I1228 07:19:15.866530 228126 out.go:179] - MINIKUBE_LOCATION=22352
I1228 07:19:15.866619 228126 notify.go:221] Checking for updates...
I1228 07:19:15.870626 228126 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1228 07:19:15.873924 228126 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22352-2380/kubeconfig
I1228 07:19:15.877084 228126 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2380/.minikube
I1228 07:19:15.880271 228126 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I1228 07:19:15.883516 228126 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1228 07:19:15.887232 228126 config.go:182] Loaded profile config "force-systemd-flag-257442": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1228 07:19:15.887398 228126 driver.go:422] Setting default libvirt URI to qemu:///system
I1228 07:19:15.920930 228126 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
I1228 07:19:15.921049 228126 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1228 07:19:15.993776 228126 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-28 07:19:15.982927547 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1228 07:19:15.993882 228126 docker.go:319] overlay module found
I1228 07:19:15.999413 228126 out.go:179] * Using the docker driver based on user configuration
I1228 07:19:16.002624 228126 start.go:309] selected driver: docker
I1228 07:19:16.002656 228126 start.go:928] validating driver "docker" against <nil>
I1228 07:19:16.002671 228126 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1228 07:19:16.003491 228126 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1228 07:19:16.078708 228126 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-28 07:19:16.068288874 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1228 07:19:16.078855 228126 start_flags.go:333] no existing cluster config was found, will generate one from the flags
I1228 07:19:16.079069 228126 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1228 07:19:16.082125 228126 out.go:179] * Using Docker driver with root privileges
I1228 07:19:16.085153 228126 cni.go:84] Creating CNI manager for ""
I1228 07:19:16.085226 228126 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1228 07:19:16.085242 228126 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
I1228 07:19:16.085336 228126 start.go:353] cluster config:
{Name:embed-certs-468470 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-468470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1228 07:19:16.088491 228126 out.go:179] * Starting "embed-certs-468470" primary control-plane node in "embed-certs-468470" cluster
I1228 07:19:16.091359 228126 cache.go:134] Beginning downloading kic base image for docker with containerd
I1228 07:19:16.094407 228126 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
I1228 07:19:16.097395 228126 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I1228 07:19:16.097444 228126 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22352-2380/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
I1228 07:19:16.097467 228126 cache.go:65] Caching tarball of preloaded images
I1228 07:19:16.097467 228126 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
I1228 07:19:16.097546 228126 preload.go:251] Found /home/jenkins/minikube-integration/22352-2380/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I1228 07:19:16.097556 228126 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
I1228 07:19:16.097657 228126 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/config.json ...
I1228 07:19:16.097681 228126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/config.json: {Name:mk9b95fe4d627fe34aac6746b83e81a6d6cc5dbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1228 07:19:16.116788 228126 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
I1228 07:19:16.116808 228126 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
I1228 07:19:16.116825 228126 cache.go:243] Successfully downloaded all kic artifacts
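# The cache checks above look for two artifacts before downloading anything: the
# preload tarball on disk and the kic base image in the local docker daemon. A
# hedged sketch of the same daemon-side check, repo name taken from the lines above:
docker images --digests gcr.io/k8s-minikube/kicbase-builds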
I1228 07:19:16.116855 228126 start.go:360] acquireMachinesLock for embed-certs-468470: {Name:mke430c2aaf951f831e2ac8aaeccff9516da0ba2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1228 07:19:16.116957 228126 start.go:364] duration metric: took 83.061µs to acquireMachinesLock for "embed-certs-468470"
I1228 07:19:16.116988 228126 start.go:93] Provisioning new machine with config: &{Name:embed-certs-468470 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-468470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1228 07:19:16.117062 228126 start.go:125] createHost starting for "" (driver="docker")
I1228 07:19:16.120406 228126 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
I1228 07:19:16.120647 228126 start.go:159] libmachine.API.Create for "embed-certs-468470" (driver="docker")
I1228 07:19:16.120685 228126 client.go:173] LocalClient.Create starting
I1228 07:19:16.120747 228126 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem
I1228 07:19:16.120785 228126 main.go:144] libmachine: Decoding PEM data...
I1228 07:19:16.120808 228126 main.go:144] libmachine: Parsing certificate...
I1228 07:19:16.120867 228126 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem
I1228 07:19:16.120889 228126 main.go:144] libmachine: Decoding PEM data...
I1228 07:19:16.120900 228126 main.go:144] libmachine: Parsing certificate...
I1228 07:19:16.121248 228126 cli_runner.go:164] Run: docker network inspect embed-certs-468470 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1228 07:19:16.137735 228126 cli_runner.go:211] docker network inspect embed-certs-468470 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1228 07:19:16.137815 228126 network_create.go:284] running [docker network inspect embed-certs-468470] to gather additional debugging logs...
I1228 07:19:16.137834 228126 cli_runner.go:164] Run: docker network inspect embed-certs-468470
W1228 07:19:16.152355 228126 cli_runner.go:211] docker network inspect embed-certs-468470 returned with exit code 1
I1228 07:19:16.152436 228126 network_create.go:287] error running [docker network inspect embed-certs-468470]: docker network inspect embed-certs-468470: exit status 1
stdout:
[]
stderr:
Error response from daemon: network embed-certs-468470 not found
I1228 07:19:16.152502 228126 network_create.go:289] output of [docker network inspect embed-certs-468470]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network embed-certs-468470 not found
** /stderr **
I1228 07:19:16.152603 228126 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1228 07:19:16.168855 228126 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0cde5aa00dd2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4a:fe:5c:61:4e:40} reservation:<nil>}
I1228 07:19:16.169179 228126 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-7076eb593482 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:f2:28:2e:88:b4:01} reservation:<nil>}
I1228 07:19:16.169493 228126 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-30438d931074 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:10:11:ea:ef:c7} reservation:<nil>}
I1228 07:19:16.169906 228126 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019e94f0}
I1228 07:19:16.169933 228126 network_create.go:124] attempt to create docker network embed-certs-468470 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
I1228 07:19:16.169990 228126 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-468470 embed-certs-468470
I1228 07:19:16.222201 228126 network_create.go:108] docker network embed-certs-468470 192.168.76.0/24 created
I1228 07:19:16.222236 228126 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-468470" container
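The subnet walk above (49 → 58 → 67 → 76) is a first-free scan over candidate /24s, with the gateway pinned to .1 and the first container IP to .2. Below is a minimal sketch of that selection logic using only the Go standard library; the candidate list and the isTaken helper are illustrative assumptions, not minikube's network_create.go internals. The br-* interface names in the "taken" lines suggest the real check is interface-based, as here.

package main

import (
	"fmt"
	"net"
)

// isTaken reports whether any host interface address already sits inside
// cidr, roughly what the "skipping subnet ... that is taken" lines check.
func isTaken(cidr string) bool {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return true // treat unparsable candidates as unusable
	}
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return true
	}
	for _, a := range addrs {
		if ip, _, err := net.ParseCIDR(a.String()); err == nil && ipnet.Contains(ip) {
			return true
		}
	}
	return false
}

func main() {
	// Candidate /24s in the same 192.168.x.0 progression the log walks through.
	for _, cidr := range []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24", "192.168.76.0/24"} {
		if !isTaken(cidr) {
			_, ipnet, _ := net.ParseCIDR(cidr)
			gw := ipnet.IP.To4()
			gw[3]++ // .1 becomes the gateway; .2 is the first container IP
			fmt.Printf("using free subnet %s with gateway %s\n", cidr, gw)
			return
		}
	}
	fmt.Println("no free subnet found")
}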
I1228 07:19:16.222325 228126 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1228 07:19:16.238622 228126 cli_runner.go:164] Run: docker volume create embed-certs-468470 --label name.minikube.sigs.k8s.io=embed-certs-468470 --label created_by.minikube.sigs.k8s.io=true
I1228 07:19:16.255856 228126 oci.go:103] Successfully created a docker volume embed-certs-468470
I1228 07:19:16.255937 228126 cli_runner.go:164] Run: docker run --rm --name embed-certs-468470-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-468470 --entrypoint /usr/bin/test -v embed-certs-468470:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -d /var/lib
I1228 07:19:16.780750 228126 oci.go:107] Successfully prepared a docker volume embed-certs-468470
I1228 07:19:16.780809 228126 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I1228 07:19:16.780819 228126 kic.go:194] Starting extracting preloaded images to volume ...
I1228 07:19:16.780881 228126 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22352-2380/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-468470:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -I lz4 -xf /preloaded.tar -C /extractDir
I1228 07:19:20.632070 228126 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22352-2380/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-468470:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.851149642s)
I1228 07:19:20.632105 228126 kic.go:203] duration metric: took 3.851281533s to extract preloaded images to volume ...
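The preload step above mounts the .tar.lz4 read-only into a throwaway container and untars it into the named volume, so extraction uses the image's own tar/lz4 tooling rather than the host's. A hedged Go replay of that exact docker invocation; paths, volume name, and image are copied from the log, and this is not minikube's internal code, just the equivalent command driven from os/exec.

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	// Values mirror the logged command; adjust for your own cache layout.
	const (
		tarball = "/home/jenkins/minikube-integration/22352-2380/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4"
		volume  = "embed-certs-468470"
		image   = "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351"
	)
	start := time.Now()
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
	// The log's "duration metric" lines record this same elapsed time.
	log.Printf("extracted preload in %s", time.Since(start))
}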
W1228 07:19:20.632236 228126 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1228 07:19:20.632354 228126 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1228 07:19:20.685476 228126 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-468470 --name embed-certs-468470 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-468470 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-468470 --network embed-certs-468470 --ip 192.168.76.2 --volume embed-certs-468470:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1
I1228 07:19:21.004837 228126 cli_runner.go:164] Run: docker container inspect embed-certs-468470 --format={{.State.Running}}
I1228 07:19:21.025928 228126 cli_runner.go:164] Run: docker container inspect embed-certs-468470 --format={{.State.Status}}
I1228 07:19:21.052728 228126 cli_runner.go:164] Run: docker exec embed-certs-468470 stat /var/lib/dpkg/alternatives/iptables
I1228 07:19:21.107301 228126 oci.go:144] the created container "embed-certs-468470" has a running status.
I1228 07:19:21.107330 228126 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22352-2380/.minikube/machines/embed-certs-468470/id_rsa...
I1228 07:19:21.206855 228126 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22352-2380/.minikube/machines/embed-certs-468470/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1228 07:19:21.230585 228126 cli_runner.go:164] Run: docker container inspect embed-certs-468470 --format={{.State.Status}}
I1228 07:19:21.260868 228126 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1228 07:19:21.260885 228126 kic_runner.go:114] Args: [docker exec --privileged embed-certs-468470 chown docker:docker /home/docker/.ssh/authorized_keys]
I1228 07:19:21.321021 228126 cli_runner.go:164] Run: docker container inspect embed-certs-468470 --format={{.State.Status}}
I1228 07:19:21.345509 228126 machine.go:94] provisionDockerMachine start ...
I1228 07:19:21.345596 228126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468470
I1228 07:19:21.365232 228126 main.go:144] libmachine: Using SSH client type: native
I1228 07:19:21.366170 228126 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33075 <nil> <nil>}
I1228 07:19:21.366189 228126 main.go:144] libmachine: About to run SSH command:
hostname
I1228 07:19:21.373060 228126 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47780->127.0.0.1:33075: read: connection reset by peer
I1228 07:19:24.507813 228126 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-468470
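The handshake failure at 07:19:21 followed by a clean hostname result at 07:19:24 is the normal retry pattern: the forwarded SSH port (127.0.0.1:33075 here) starts accepting connections before sshd inside the container is ready, so the client keeps re-dialing until a session succeeds. A minimal stand-in for that wait loop, assuming plain TCP probing is enough for illustration; libmachine's actual client does a full SSH handshake.

package main

import (
	"fmt"
	"net"
	"time"
)

func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil // port accepts connections; an SSH handshake can follow
		}
		// e.g. "connection reset by peer" while sshd is still starting
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("ssh on %s not ready within %s", addr, timeout)
}

func main() {
	if err := waitForSSH("127.0.0.1:33075", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}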
I1228 07:19:24.507840 228126 ubuntu.go:182] provisioning hostname "embed-certs-468470"
I1228 07:19:24.507904 228126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468470
I1228 07:19:24.525322 228126 main.go:144] libmachine: Using SSH client type: native
I1228 07:19:24.525629 228126 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33075 <nil> <nil>}
I1228 07:19:24.525645 228126 main.go:144] libmachine: About to run SSH command:
sudo hostname embed-certs-468470 && echo "embed-certs-468470" | sudo tee /etc/hostname
I1228 07:19:24.669369 228126 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-468470
I1228 07:19:24.669442 228126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468470
I1228 07:19:24.686412 228126 main.go:144] libmachine: Using SSH client type: native
I1228 07:19:24.686718 228126 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33075 <nil> <nil>}
I1228 07:19:24.686734 228126 main.go:144] libmachine: About to run SSH command:
if ! grep -xq '.*\sembed-certs-468470' /etc/hosts; then
	if grep -xq '127.0.1.1\s.*' /etc/hosts; then
		sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-468470/g' /etc/hosts;
	else
		echo '127.0.1.1 embed-certs-468470' | sudo tee -a /etc/hosts;
	fi
fi
I1228 07:19:24.821357 228126 main.go:144] libmachine: SSH cmd err, output: <nil>:
I1228 07:19:24.821380 228126 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22352-2380/.minikube CaCertPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22352-2380/.minikube}
I1228 07:19:24.821406 228126 ubuntu.go:190] setting up certificates
I1228 07:19:24.821416 228126 provision.go:84] configureAuth start
I1228 07:19:24.821473 228126 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-468470
I1228 07:19:24.843887 228126 provision.go:143] copyHostCerts
I1228 07:19:24.843970 228126 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2380/.minikube/ca.pem, removing ...
I1228 07:19:24.843983 228126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2380/.minikube/ca.pem
I1228 07:19:24.844098 228126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22352-2380/.minikube/ca.pem (1082 bytes)
I1228 07:19:24.844210 228126 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2380/.minikube/cert.pem, removing ...
I1228 07:19:24.844216 228126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2380/.minikube/cert.pem
I1228 07:19:24.844250 228126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22352-2380/.minikube/cert.pem (1123 bytes)
I1228 07:19:24.844326 228126 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2380/.minikube/key.pem, removing ...
I1228 07:19:24.844341 228126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2380/.minikube/key.pem
I1228 07:19:24.844376 228126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22352-2380/.minikube/key.pem (1679 bytes)
I1228 07:19:24.844446 228126 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca-key.pem org=jenkins.embed-certs-468470 san=[127.0.0.1 192.168.76.2 embed-certs-468470 localhost minikube]
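The server cert generated above carries both IP and DNS SANs (127.0.0.1, 192.168.76.2, embed-certs-468470, localhost, minikube) so the same endpoint verifies however it is addressed. A standard-library sketch of issuing such a certificate follows; it self-signs for brevity where the real flow signs with ca.pem/ca-key.pem, and the 26280h lifetime is taken from the CertExpiration field in the config dump.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-468470"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		DNSNames:     []string{"embed-certs-468470", "localhost", "minikube"},
	}
	// Self-signed here for brevity; the logged flow signs with the shared CA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}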
I1228 07:19:24.940479 228126 provision.go:177] copyRemoteCerts
I1228 07:19:24.940563 228126 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1228 07:19:24.940654 228126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468470
I1228 07:19:24.959228 228126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33075 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/embed-certs-468470/id_rsa Username:docker}
I1228 07:19:25.068416 228126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I1228 07:19:25.086084 228126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1228 07:19:25.103550 228126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1228 07:19:25.120390 228126 provision.go:87] duration metric: took 298.960917ms to configureAuth
I1228 07:19:25.120421 228126 ubuntu.go:206] setting minikube options for container-runtime
I1228 07:19:25.120645 228126 config.go:182] Loaded profile config "embed-certs-468470": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1228 07:19:25.120661 228126 machine.go:97] duration metric: took 3.775133456s to provisionDockerMachine
I1228 07:19:25.120669 228126 client.go:176] duration metric: took 8.999974108s to LocalClient.Create
I1228 07:19:25.120687 228126 start.go:167] duration metric: took 9.000040488s to libmachine.API.Create "embed-certs-468470"
I1228 07:19:25.120695 228126 start.go:293] postStartSetup for "embed-certs-468470" (driver="docker")
I1228 07:19:25.120707 228126 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1228 07:19:25.120762 228126 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1228 07:19:25.120805 228126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468470
I1228 07:19:25.138516 228126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33075 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/embed-certs-468470/id_rsa Username:docker}
I1228 07:19:25.236642 228126 ssh_runner.go:195] Run: cat /etc/os-release
I1228 07:19:25.239796 228126 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1228 07:19:25.239822 228126 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1228 07:19:25.239833 228126 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-2380/.minikube/addons for local assets ...
I1228 07:19:25.239885 228126 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-2380/.minikube/files for local assets ...
I1228 07:19:25.239971 228126 filesync.go:149] local asset: /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem -> 41952.pem in /etc/ssl/certs
I1228 07:19:25.240078 228126 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1228 07:19:25.247453 228126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem --> /etc/ssl/certs/41952.pem (1708 bytes)
I1228 07:19:25.264336 228126 start.go:296] duration metric: took 143.623195ms for postStartSetup
I1228 07:19:25.264751 228126 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-468470
I1228 07:19:25.281462 228126 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/config.json ...
I1228 07:19:25.281766 228126 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1228 07:19:25.281817 228126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468470
I1228 07:19:25.298041 228126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33075 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/embed-certs-468470/id_rsa Username:docker}
I1228 07:19:25.393255 228126 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1228 07:19:25.397701 228126 start.go:128] duration metric: took 9.280616658s to createHost
I1228 07:19:25.397773 228126 start.go:83] releasing machines lock for "embed-certs-468470", held for 9.280801939s
I1228 07:19:25.397873 228126 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-468470
I1228 07:19:25.415714 228126 ssh_runner.go:195] Run: cat /version.json
I1228 07:19:25.415760 228126 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1228 07:19:25.415770 228126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468470
I1228 07:19:25.415815 228126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468470
I1228 07:19:25.437516 228126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33075 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/embed-certs-468470/id_rsa Username:docker}
I1228 07:19:25.438133 228126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33075 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/embed-certs-468470/id_rsa Username:docker}
I1228 07:19:25.532052 228126 ssh_runner.go:195] Run: systemctl --version
I1228 07:19:25.621277 228126 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1228 07:19:25.625520 228126 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1228 07:19:25.625593 228126 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1228 07:19:25.652436 228126 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
I1228 07:19:25.652480 228126 start.go:496] detecting cgroup driver to use...
I1228 07:19:25.652529 228126 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1228 07:19:25.652598 228126 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1228 07:19:25.667425 228126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1228 07:19:25.679845 228126 docker.go:218] disabling cri-docker service (if available) ...
I1228 07:19:25.679953 228126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1228 07:19:25.697058 228126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1228 07:19:25.715630 228126 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1228 07:19:25.841178 228126 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1228 07:19:25.965301 228126 docker.go:234] disabling docker service ...
I1228 07:19:25.965364 228126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1228 07:19:25.987529 228126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1228 07:19:26.005942 228126 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1228 07:19:26.124496 228126 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1228 07:19:26.232388 228126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1228 07:19:26.244608 228126 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1228 07:19:26.258117 228126 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1228 07:19:26.266620 228126 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1228 07:19:26.275111 228126 containerd.go:147] configuring containerd to use "cgroupfs" as cgroup driver...
I1228 07:19:26.275232 228126 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1228 07:19:26.284232 228126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1228 07:19:26.292564 228126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1228 07:19:26.300953 228126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1228 07:19:26.309434 228126 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1228 07:19:26.317183 228126 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1228 07:19:26.325692 228126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1228 07:19:26.334482 228126 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
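The run of sed edits above rewrites /etc/containerd/config.toml in place to force the cgroupfs driver and the runc v2 shim, preserving each line's indentation via the capture group. The same idempotent rewrite can be expressed with a multiline regexp in Go; this sketch covers only the SystemdCgroup edit, not the full set of edits the log shows.

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// Mirrors: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		log.Fatal(err)
	}
}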
I1228 07:19:26.343248 228126 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1228 07:19:26.350679 228126 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1228 07:19:26.357814 228126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1228 07:19:26.468738 228126 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1228 07:19:26.608217 228126 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
I1228 07:19:26.608291 228126 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1228 07:19:26.612366 228126 start.go:574] Will wait 60s for crictl version
I1228 07:19:26.612435 228126 ssh_runner.go:195] Run: which crictl
I1228 07:19:26.615732 228126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1228 07:19:26.639804 228126 start.go:590] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v2.2.1
RuntimeApiVersion: v1
I1228 07:19:26.639879 228126 ssh_runner.go:195] Run: containerd --version
I1228 07:19:26.658994 228126 ssh_runner.go:195] Run: containerd --version
I1228 07:19:26.682435 228126 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
I1228 07:19:26.685439 228126 cli_runner.go:164] Run: docker network inspect embed-certs-468470 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1228 07:19:26.701161 228126 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I1228 07:19:26.704989 228126 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1228 07:19:26.714128 228126 kubeadm.go:884] updating cluster {Name:embed-certs-468470 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-468470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
I1228 07:19:26.714240 228126 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I1228 07:19:26.714310 228126 ssh_runner.go:195] Run: sudo crictl images --output json
I1228 07:19:26.745726 228126 containerd.go:635] all images are preloaded for containerd runtime.
I1228 07:19:26.745750 228126 containerd.go:542] Images already preloaded, skipping extraction
I1228 07:19:26.745808 228126 ssh_runner.go:195] Run: sudo crictl images --output json
I1228 07:19:26.770424 228126 containerd.go:635] all images are preloaded for containerd runtime.
I1228 07:19:26.770445 228126 cache_images.go:86] Images are preloaded, skipping loading
I1228 07:19:26.770453 228126 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 containerd true true} ...
I1228 07:19:26.770593 228126 kubeadm.go:947] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-468470 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
[Install]
config:
{KubernetesVersion:v1.35.0 ClusterName:embed-certs-468470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1228 07:19:26.770663 228126 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
I1228 07:19:26.798279 228126 cni.go:84] Creating CNI manager for ""
I1228 07:19:26.798304 228126 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1228 07:19:26.798320 228126 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1228 07:19:26.798356 228126 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-468470 NodeName:embed-certs-468470 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1228 07:19:26.798483 228126 kubeadm.go:203] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.76.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "embed-certs-468470"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.76.2"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.35.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
failCgroupV1: false
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I1228 07:19:26.798553 228126 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
I1228 07:19:26.806138 228126 binaries.go:51] Found k8s binaries, skipping transfer
I1228 07:19:26.806227 228126 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1228 07:19:26.813664 228126 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
I1228 07:19:26.826966 228126 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1228 07:19:26.839537 228126 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2251 bytes)
I1228 07:19:26.852166 228126 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I1228 07:19:26.855759 228126 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1228 07:19:26.865783 228126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1228 07:19:26.974833 228126 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1228 07:19:26.990153 228126 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470 for IP: 192.168.76.2
I1228 07:19:26.990170 228126 certs.go:195] generating shared ca certs ...
I1228 07:19:26.990185 228126 certs.go:227] acquiring lock for ca certs: {Name:mk867c51c31d3664751580ce57c19c8b4916033e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1228 07:19:26.990326 228126 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-2380/.minikube/ca.key
I1228 07:19:26.990377 228126 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-2380/.minikube/proxy-client-ca.key
I1228 07:19:26.990385 228126 certs.go:257] generating profile certs ...
I1228 07:19:26.990448 228126 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/client.key
I1228 07:19:26.990466 228126 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/client.crt with IP's: []
I1228 07:19:27.425811 228126 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/client.crt ...
I1228 07:19:27.425845 228126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/client.crt: {Name:mkb79580e8540dbbfaebd8ca79c423a035a96d24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1228 07:19:27.426088 228126 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/client.key ...
I1228 07:19:27.426104 228126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/client.key: {Name:mk9d9bf51638090ade7e9193ee7c1bf78591647c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1228 07:19:27.426239 228126 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/apiserver.key.b2b89338
I1228 07:19:27.426260 228126 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/apiserver.crt.b2b89338 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
I1228 07:19:27.606180 228126 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/apiserver.crt.b2b89338 ...
I1228 07:19:27.606207 228126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/apiserver.crt.b2b89338: {Name:mka1ece8130add6a9fa45d6969188597caff796b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1228 07:19:27.606385 228126 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/apiserver.key.b2b89338 ...
I1228 07:19:27.606400 228126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/apiserver.key.b2b89338: {Name:mkd5cb9d9c7b4f9d06fef0319d1c296938643eea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1228 07:19:27.606488 228126 certs.go:382] copying /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/apiserver.crt.b2b89338 -> /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/apiserver.crt
I1228 07:19:27.606570 228126 certs.go:386] copying /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/apiserver.key.b2b89338 -> /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/apiserver.key
I1228 07:19:27.606662 228126 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/proxy-client.key
I1228 07:19:27.606681 228126 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/proxy-client.crt with IP's: []
I1228 07:19:27.931376 228126 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/proxy-client.crt ...
I1228 07:19:27.931405 228126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/proxy-client.crt: {Name:mk9e597c4c024bbac614c08ef0919f65c7022cea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1228 07:19:27.931585 228126 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/proxy-client.key ...
I1228 07:19:27.931598 228126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/proxy-client.key: {Name:mk1d959289b8333386c68b4dcfec6e816455d42d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1228 07:19:27.931790 228126 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/4195.pem (1338 bytes)
W1228 07:19:27.931835 228126 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-2380/.minikube/certs/4195_empty.pem, impossibly tiny 0 bytes
I1228 07:19:27.931850 228126 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca-key.pem (1675 bytes)
I1228 07:19:27.931878 228126 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/ca.pem (1082 bytes)
I1228 07:19:27.931906 228126 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/cert.pem (1123 bytes)
I1228 07:19:27.931932 228126 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/certs/key.pem (1679 bytes)
I1228 07:19:27.931984 228126 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem (1708 bytes)
I1228 07:19:27.932574 228126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1228 07:19:27.954201 228126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1228 07:19:27.973843 228126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1228 07:19:27.993420 228126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1228 07:19:28.015789 228126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
I1228 07:19:28.038484 228126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1228 07:19:28.056187 228126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1228 07:19:28.073720 228126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/profiles/embed-certs-468470/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I1228 07:19:28.091830 228126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/certs/4195.pem --> /usr/share/ca-certificates/4195.pem (1338 bytes)
I1228 07:19:28.109629 228126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/files/etc/ssl/certs/41952.pem --> /usr/share/ca-certificates/41952.pem (1708 bytes)
I1228 07:19:28.126776 228126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2380/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1228 07:19:28.144645 228126 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
I1228 07:19:28.157500 228126 ssh_runner.go:195] Run: openssl version
I1228 07:19:28.163759 228126 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4195.pem
I1228 07:19:28.171623 228126 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4195.pem /etc/ssl/certs/4195.pem
I1228 07:19:28.179055 228126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4195.pem
I1228 07:19:28.182745 228126 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:34 /usr/share/ca-certificates/4195.pem
I1228 07:19:28.182814 228126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4195.pem
I1228 07:19:28.223850 228126 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I1228 07:19:28.231363 228126 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4195.pem /etc/ssl/certs/51391683.0
I1228 07:19:28.238828 228126 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41952.pem
I1228 07:19:28.246253 228126 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41952.pem /etc/ssl/certs/41952.pem
I1228 07:19:28.253868 228126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41952.pem
I1228 07:19:28.257637 228126 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:34 /usr/share/ca-certificates/41952.pem
I1228 07:19:28.257706 228126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41952.pem
I1228 07:19:28.298516 228126 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I1228 07:19:28.306116 228126 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41952.pem /etc/ssl/certs/3ec20f2e.0
I1228 07:19:28.313539 228126 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1228 07:19:28.320788 228126 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1228 07:19:28.328332 228126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1228 07:19:28.332223 228126 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:28 /usr/share/ca-certificates/minikubeCA.pem
I1228 07:19:28.332287 228126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1228 07:19:28.373381 228126 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1228 07:19:28.381041 228126 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
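Each certificate above is trusted by computing its OpenSSL subject hash (openssl x509 -hash -noout) and pointing a /etc/ssl/certs/<hash>.0 symlink at it, which is how OpenSSL's lookup-by-hash directory scheme finds CAs. A sketch of that dance follows; trust() is a hypothetical helper name, and the hard-coded ".0" suffix assumes no hash collision, matching the single-link-per-cert pattern in the log.

package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

func trust(certPath string) error {
	// openssl prints the subject hash (e.g. "b5213941") plus a newline.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	os.Remove(link) // mirrors ln -fs: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := trust("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}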
I1228 07:19:28.388558 228126 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1228 07:19:28.392224 228126 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1228 07:19:28.392276 228126 kubeadm.go:401] StartCluster: {Name:embed-certs-468470 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-468470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1228 07:19:28.392401 228126 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
W1228 07:19:28.403735 228126 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:19:28Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I1228 07:19:28.403820 228126 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1228 07:19:28.412127 228126 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1228 07:19:28.420944 228126 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1228 07:19:28.421025 228126 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1228 07:19:28.428847 228126 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1228 07:19:28.428881 228126 kubeadm.go:158] found existing configuration files:
I1228 07:19:28.428936 228126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1228 07:19:28.437768 228126 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1228 07:19:28.437869 228126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1228 07:19:28.445718 228126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1228 07:19:28.453450 228126 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1228 07:19:28.453516 228126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1228 07:19:28.461229 228126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1228 07:19:28.469661 228126 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1228 07:19:28.469774 228126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1228 07:19:28.477804 228126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1228 07:19:28.486440 228126 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1228 07:19:28.486552 228126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1228 07:19:28.494320 228126 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1228 07:19:28.542158 228126 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
I1228 07:19:28.542218 228126 kubeadm.go:319] [preflight] Running pre-flight checks
I1228 07:19:28.621508 228126 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1228 07:19:28.621668 228126 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1228 07:19:28.621752 228126 kubeadm.go:319] OS: Linux
I1228 07:19:28.621841 228126 kubeadm.go:319] CGROUPS_CPU: enabled
I1228 07:19:28.621922 228126 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1228 07:19:28.622005 228126 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1228 07:19:28.622089 228126 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1228 07:19:28.622173 228126 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1228 07:19:28.622257 228126 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1228 07:19:28.622346 228126 kubeadm.go:319] CGROUPS_PIDS: enabled
I1228 07:19:28.622437 228126 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1228 07:19:28.622524 228126 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1228 07:19:28.705890 228126 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1228 07:19:28.706005 228126 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1228 07:19:28.706101 228126 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1228 07:19:28.713035 228126 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1228 07:19:28.719479 228126 out.go:252] - Generating certificates and keys ...
I1228 07:19:28.719580 228126 kubeadm.go:319] [certs] Using existing ca certificate authority
I1228 07:19:28.719656 228126 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1228 07:19:29.007204 228126 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1228 07:19:29.332150 228126 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1228 07:19:29.561813 228126 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1228 07:19:29.693999 228126 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1228 07:19:29.980869 228126 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1228 07:19:29.981292 228126 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-468470 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
I1228 07:19:30.078386 228126 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1228 07:19:30.078847 228126 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-468470 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
I1228 07:19:30.436671 228126 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1228 07:19:30.640145 228126 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1228 07:19:30.829003 228126 kubeadm.go:319] [certs] Generating "sa" key and public key
I1228 07:19:30.829278 228126 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1228 07:19:30.888274 228126 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1228 07:19:30.964959 228126 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1228 07:19:31.262278 228126 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1228 07:19:31.748871 228126 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1228 07:19:31.976397 228126 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1228 07:19:31.976989 228126 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1228 07:19:31.979621 228126 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1228 07:19:31.983318 228126 out.go:252] - Booting up control plane ...
I1228 07:19:31.983424 228126 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1228 07:19:31.983502 228126 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1228 07:19:31.983569 228126 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1228 07:19:31.999836 228126 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1228 07:19:31.999959 228126 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1228 07:19:32.008406 228126 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1228 07:19:32.011860 228126 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1228 07:19:32.012130 228126 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1228 07:19:32.146133 228126 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1228 07:19:32.147734 228126 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1228 07:19:33.150021 228126 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00232577s
I1228 07:19:33.153484 228126 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1228 07:19:33.153579 228126 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
I1228 07:19:33.153890 228126 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1228 07:19:33.153984 228126 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1228 07:19:35.162354 228126 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.008492574s
I1228 07:19:36.735973 228126 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.582474429s
I1228 07:19:38.656042 228126 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.502351991s
I1228 07:19:38.693604 228126 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1228 07:19:38.720594 228126 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1228 07:19:38.740108 228126 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
I1228 07:19:38.740639 228126 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-468470 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1228 07:19:38.754118 228126 kubeadm.go:319] [bootstrap-token] Using token: hmbmvb.uu6m3nlil2j14dzg
I1228 07:19:38.757114 228126 out.go:252] - Configuring RBAC rules ...
I1228 07:19:38.757239 228126 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1228 07:19:38.762445 228126 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1228 07:19:38.773192 228126 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1228 07:19:38.784104 228126 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1228 07:19:38.789139 228126 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1228 07:19:38.794519 228126 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1228 07:19:39.065357 228126 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1228 07:19:39.494566 228126 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
I1228 07:19:40.063254 228126 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
I1228 07:19:40.064622 228126 kubeadm.go:319]
I1228 07:19:40.064695 228126 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
I1228 07:19:40.064701 228126 kubeadm.go:319]
I1228 07:19:40.064799 228126 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
I1228 07:19:40.064820 228126 kubeadm.go:319]
I1228 07:19:40.064847 228126 kubeadm.go:319] mkdir -p $HOME/.kube
I1228 07:19:40.064910 228126 kubeadm.go:319] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1228 07:19:40.064969 228126 kubeadm.go:319] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1228 07:19:40.064975 228126 kubeadm.go:319]
I1228 07:19:40.065029 228126 kubeadm.go:319] Alternatively, if you are the root user, you can run:
I1228 07:19:40.065033 228126 kubeadm.go:319]
I1228 07:19:40.065081 228126 kubeadm.go:319] export KUBECONFIG=/etc/kubernetes/admin.conf
I1228 07:19:40.065085 228126 kubeadm.go:319]
I1228 07:19:40.065137 228126 kubeadm.go:319] You should now deploy a pod network to the cluster.
I1228 07:19:40.065219 228126 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1228 07:19:40.065287 228126 kubeadm.go:319] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1228 07:19:40.065291 228126 kubeadm.go:319]
I1228 07:19:40.065376 228126 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
I1228 07:19:40.065457 228126 kubeadm.go:319] and service account keys on each node and then running the following as root:
I1228 07:19:40.065462 228126 kubeadm.go:319]
I1228 07:19:40.065547 228126 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token hmbmvb.uu6m3nlil2j14dzg \
I1228 07:19:40.065665 228126 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:245ab1e37d24b07cc412580775e400c938559b58f26292f1d84f87371e4e4a5f \
I1228 07:19:40.065686 228126 kubeadm.go:319] --control-plane
I1228 07:19:40.065690 228126 kubeadm.go:319]
I1228 07:19:40.065774 228126 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
I1228 07:19:40.065778 228126 kubeadm.go:319]
I1228 07:19:40.065861 228126 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token hmbmvb.uu6m3nlil2j14dzg \
I1228 07:19:40.065963 228126 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:245ab1e37d24b07cc412580775e400c938559b58f26292f1d84f87371e4e4a5f
I1228 07:19:40.070135 228126 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1228 07:19:40.070585 228126 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1228 07:19:40.070702 228126 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
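[Editor's note] The cgroups v1 warning above names the kubelet's 'FailCgroupV1' configuration option. A minimal sketch of opting back into cgroups v1, assuming the active KubeletConfiguration is the /var/lib/kubelet/config.yaml written above and that the YAML field is the camelCase failCgroupV1 (verify against your kubelet version):

    # Assumption: failCgroupV1 is the KubeletConfiguration field the warning
    # refers to; append it to the config kubeadm just wrote, then restart.
    echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
    sudo systemctl restart kubelet

Per the warning, the SystemVerification preflight check must also be explicitly skipped (minikube's kubeadm invocation later in this log already passes it via --ignore-preflight-errors).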
I1228 07:19:40.070723 228126 cni.go:84] Creating CNI manager for ""
I1228 07:19:40.070737 228126 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1228 07:19:40.073889 228126 out.go:179] * Configuring CNI (Container Networking Interface) ...
I1228 07:19:40.076807 228126 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I1228 07:19:40.081116 228126 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
I1228 07:19:40.081136 228126 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
I1228 07:19:40.094421 228126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I1228 07:19:40.388171 228126 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1228 07:19:40.388303 228126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1228 07:19:40.388385 228126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-468470 minikube.k8s.io/updated_at=2025_12_28T07_19_40_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a9d18bae8c1fce4e804f90745897ed87020e8dba minikube.k8s.io/name=embed-certs-468470 minikube.k8s.io/primary=true
I1228 07:19:40.542163 228126 ops.go:34] apiserver oom_adj: -16
I1228 07:19:40.542294 228126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1228 07:19:41.043207 228126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1228 07:19:41.542984 228126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1228 07:19:42.042481 228126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1228 07:19:42.542394 228126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1228 07:19:43.042699 228126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1228 07:19:43.542658 228126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1228 07:19:44.043094 228126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1228 07:19:44.542606 228126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1228 07:19:44.651612 228126 kubeadm.go:1114] duration metric: took 4.263355456s to wait for elevateKubeSystemPrivileges
I1228 07:19:44.651643 228126 kubeadm.go:403] duration metric: took 16.259370963s to StartCluster
I1228 07:19:44.651673 228126 settings.go:142] acquiring lock: {Name:mkd0957c79da89608d9af840389e3a7d694fc663 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1228 07:19:44.651733 228126 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/22352-2380/kubeconfig
I1228 07:19:44.652769 228126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2380/kubeconfig: {Name:mked7d6b3a17ba51b6f07689f1eb1c98c58f0940 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1228 07:19:44.652992 228126 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1228 07:19:44.653096 228126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1228 07:19:44.653366 228126 config.go:182] Loaded profile config "embed-certs-468470": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1228 07:19:44.653413 228126 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I1228 07:19:44.653474 228126 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-468470"
I1228 07:19:44.653490 228126 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-468470"
I1228 07:19:44.653517 228126 host.go:66] Checking if "embed-certs-468470" exists ...
I1228 07:19:44.654003 228126 cli_runner.go:164] Run: docker container inspect embed-certs-468470 --format={{.State.Status}}
I1228 07:19:44.654328 228126 addons.go:70] Setting default-storageclass=true in profile "embed-certs-468470"
I1228 07:19:44.654361 228126 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-468470"
I1228 07:19:44.654643 228126 cli_runner.go:164] Run: docker container inspect embed-certs-468470 --format={{.State.Status}}
I1228 07:19:44.656956 228126 out.go:179] * Verifying Kubernetes components...
I1228 07:19:44.670552 228126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1228 07:19:44.678677 228126 addons.go:239] Setting addon default-storageclass=true in "embed-certs-468470"
I1228 07:19:44.678725 228126 host.go:66] Checking if "embed-certs-468470" exists ...
I1228 07:19:44.679154 228126 cli_runner.go:164] Run: docker container inspect embed-certs-468470 --format={{.State.Status}}
I1228 07:19:44.711673 228126 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
I1228 07:19:44.711692 228126 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1228 07:19:44.711762 228126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468470
I1228 07:19:44.712322 228126 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1228 07:19:44.718442 228126 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1228 07:19:44.718466 228126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1228 07:19:44.718527 228126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-468470
I1228 07:19:44.741219 228126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33075 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/embed-certs-468470/id_rsa Username:docker}
I1228 07:19:44.756182 228126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33075 SSHKeyPath:/home/jenkins/minikube-integration/22352-2380/.minikube/machines/embed-certs-468470/id_rsa Username:docker}
I1228 07:19:45.009747 228126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.76.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I1228 07:19:45.026127 228126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1228 07:19:45.038234 228126 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1228 07:19:45.054117 228126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1228 07:19:45.691432 228126 start.go:987] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
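[Editor's note] The sed pipeline above rewrites the CoreDNS Corefile in place to resolve host.minikube.internal from inside the cluster. Reconstructed from that sed expression, the injected stanza looks like:

    hosts {
       192.168.76.1 host.minikube.internal
       fallthrough
    }

It can be confirmed from the host with kubectl, e.g.:

    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'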
I1228 07:19:45.693665 228126 node_ready.go:35] waiting up to 6m0s for node "embed-certs-468470" to be "Ready" ...
I1228 07:19:46.056952 228126 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.002728114s)
I1228 07:19:46.060447 228126 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
I1228 07:19:46.062691 228126 addons.go:530] duration metric: took 1.409273436s for enable addons: enabled=[default-storageclass storage-provisioner]
I1228 07:19:46.198573 228126 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-468470" context rescaled to 1 replicas
W1228 07:19:47.696627 228126 node_ready.go:57] node "embed-certs-468470" has "Ready":"False" status (will retry)
W1228 07:19:49.697468 228126 node_ready.go:57] node "embed-certs-468470" has "Ready":"False" status (will retry)
W1228 07:19:52.196267 228126 node_ready.go:57] node "embed-certs-468470" has "Ready":"False" status (will retry)
W1228 07:19:54.696228 228126 node_ready.go:57] node "embed-certs-468470" has "Ready":"False" status (will retry)
I1228 07:19:59.133520 202182 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000148247s
I1228 07:19:59.133544 202182 kubeadm.go:319]
I1228 07:19:59.133603 202182 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1228 07:19:59.133636 202182 kubeadm.go:319] - The kubelet is not running
I1228 07:19:59.134115 202182 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1228 07:19:59.134145 202182 kubeadm.go:319]
I1228 07:19:59.134503 202182 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1228 07:19:59.134568 202182 kubeadm.go:319] - 'systemctl status kubelet'
I1228 07:19:59.134623 202182 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1228 07:19:59.134628 202182 kubeadm.go:319]
I1228 07:19:59.139795 202182 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1228 07:19:59.140678 202182 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1228 07:19:59.141000 202182 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1228 07:19:59.142131 202182 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I1228 07:19:59.142218 202182 kubeadm.go:319]
I1228 07:19:59.142358 202182 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
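[Editor's note] The two commands kubeadm suggests run inside the node container, not on the Jenkins host. A hedged sketch of invoking them against this profile via minikube ssh:

    # Hypothetical follow-up on the failing force-systemd-flag-257442 profile.
    out/minikube-linux-arm64 ssh -p force-systemd-flag-257442 "sudo systemctl status kubelet"
    out/minikube-linux-arm64 ssh -p force-systemd-flag-257442 "sudo journalctl -xeu kubelet --no-pager"

(minikube gathers equivalent journalctl output itself a few lines below.)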
I1228 07:19:59.142429 202182 kubeadm.go:403] duration metric: took 8m6.237794878s to StartCluster
I1228 07:19:59.142536 202182 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
E1228 07:19:59.154918 202182 logs.go:279] Failed to list containers for "kube-apiserver": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:19:59Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I1228 07:19:59.154991 202182 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
E1228 07:19:59.166191 202182 logs.go:279] Failed to list containers for "etcd": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:19:59Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I1228 07:19:59.166259 202182 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
E1228 07:19:59.177549 202182 logs.go:279] Failed to list containers for "coredns": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:19:59Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I1228 07:19:59.177619 202182 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
E1228 07:19:59.188550 202182 logs.go:279] Failed to list containers for "kube-scheduler": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:19:59Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I1228 07:19:59.188622 202182 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
E1228 07:19:59.199522 202182 logs.go:279] Failed to list containers for "kube-proxy": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:19:59Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I1228 07:19:59.199608 202182 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
E1228 07:19:59.222184 202182 logs.go:279] Failed to list containers for "kube-controller-manager": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:19:59Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I1228 07:19:59.222259 202182 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
E1228 07:19:59.238199 202182 logs.go:279] Failed to list containers for "kindnet": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:19:59Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I1228 07:19:59.238223 202182 logs.go:123] Gathering logs for containerd ...
I1228 07:19:59.238235 202182 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1228 07:19:59.285575 202182 logs.go:123] Gathering logs for container status ...
I1228 07:19:59.285608 202182 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1228 07:19:59.317760 202182 logs.go:123] Gathering logs for kubelet ...
I1228 07:19:59.317788 202182 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1228 07:19:59.379482 202182 logs.go:123] Gathering logs for dmesg ...
I1228 07:19:59.379521 202182 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1228 07:19:59.397974 202182 logs.go:123] Gathering logs for describe nodes ...
I1228 07:19:59.398001 202182 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1228 07:19:59.472720 202182 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1228 07:19:59.462854 4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:19:59.463664 4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:19:59.466781 4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:19:59.467148 4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:19:59.468708 4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
E1228 07:19:59.462854 4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:19:59.463664 4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:19:59.466781 4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:19:59.467148 4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:19:59.468708 4791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
W1228 07:19:59.472790 202182 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000148247s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
W1228 07:19:59.472889 202182 out.go:285] *
W1228 07:19:59.472973 202182 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000148247s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
W1228 07:19:59.472991 202182 out.go:285] *
W1228 07:19:59.473250 202182 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1228 07:19:59.480297 202182 out.go:203]
W1228 07:19:59.483295 202182 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000148247s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
W1228 07:19:59.483369 202182 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W1228 07:19:59.483394 202182 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I1228 07:19:59.486670 202182 out.go:203]
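[Editor's note] A sketch of retrying the failed start with the cgroup-driver override minikube suggests above; the flag value is taken verbatim from the suggestion, and success is not guaranteed on this cgroups v1 host:

    out/minikube-linux-arm64 start -p force-systemd-flag-257442 --memory=3072 \
      --force-systemd --driver=docker --container-runtime=containerd \
      --extra-config=kubelet.cgroup-driver=systemd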
W1228 07:19:56.697226 228126 node_ready.go:57] node "embed-certs-468470" has "Ready":"False" status (will retry)
I1228 07:19:57.696380 228126 node_ready.go:49] node "embed-certs-468470" is "Ready"
I1228 07:19:57.696409 228126 node_ready.go:38] duration metric: took 12.00272085s for node "embed-certs-468470" to be "Ready" ...
I1228 07:19:57.696422 228126 api_server.go:52] waiting for apiserver process to appear ...
I1228 07:19:57.696498 228126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1228 07:19:57.708529 228126 api_server.go:72] duration metric: took 13.055498336s to wait for apiserver process to appear ...
I1228 07:19:57.708556 228126 api_server.go:88] waiting for apiserver healthz status ...
I1228 07:19:57.708576 228126 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I1228 07:19:57.716881 228126 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
ok
I1228 07:19:57.718118 228126 api_server.go:141] control plane version: v1.35.0
I1228 07:19:57.718142 228126 api_server.go:131] duration metric: took 9.579269ms to wait for apiserver health ...
I1228 07:19:57.718152 228126 system_pods.go:43] waiting for kube-system pods to appear ...
I1228 07:19:57.721027 228126 system_pods.go:59] 8 kube-system pods found
I1228 07:19:57.721069 228126 system_pods.go:61] "coredns-7d764666f9-p9hf5" [484845c3-af90-4711-b4c6-f539472eae52] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1228 07:19:57.721076 228126 system_pods.go:61] "etcd-embed-certs-468470" [9f11fb5d-2431-4c88-bb78-4117aeaacfe3] Running
I1228 07:19:57.721082 228126 system_pods.go:61] "kindnet-tvkjv" [6dd9f279-29cb-44c4-954e-6119cec6b6ca] Running
I1228 07:19:57.721087 228126 system_pods.go:61] "kube-apiserver-embed-certs-468470" [06b4e0b1-2c97-4940-b10e-8d28f271feaf] Running
I1228 07:19:57.721098 228126 system_pods.go:61] "kube-controller-manager-embed-certs-468470" [e76d0d0d-b42d-4869-83b5-1333eac2625c] Running
I1228 07:19:57.721102 228126 system_pods.go:61] "kube-proxy-r6p5h" [23b27502-129e-42b1-b109-7cba9a746f06] Running
I1228 07:19:57.721111 228126 system_pods.go:61] "kube-scheduler-embed-certs-468470" [1e5f4fda-2f8c-454d-b9a7-1ed6f2937ec9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1228 07:19:57.721117 228126 system_pods.go:61] "storage-provisioner" [c3691d9b-ceb4-482a-b4aa-5344c24b485c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1228 07:19:57.721131 228126 system_pods.go:74] duration metric: took 2.972933ms to wait for pod list to return data ...
I1228 07:19:57.721138 228126 default_sa.go:34] waiting for default service account to be created ...
I1228 07:19:57.723530 228126 default_sa.go:45] found service account: "default"
I1228 07:19:57.723555 228126 default_sa.go:55] duration metric: took 2.41118ms for default service account to be created ...
I1228 07:19:57.723565 228126 system_pods.go:116] waiting for k8s-apps to be running ...
I1228 07:19:57.726275 228126 system_pods.go:86] 8 kube-system pods found
I1228 07:19:57.726311 228126 system_pods.go:89] "coredns-7d764666f9-p9hf5" [484845c3-af90-4711-b4c6-f539472eae52] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1228 07:19:57.726319 228126 system_pods.go:89] "etcd-embed-certs-468470" [9f11fb5d-2431-4c88-bb78-4117aeaacfe3] Running
I1228 07:19:57.726326 228126 system_pods.go:89] "kindnet-tvkjv" [6dd9f279-29cb-44c4-954e-6119cec6b6ca] Running
I1228 07:19:57.726341 228126 system_pods.go:89] "kube-apiserver-embed-certs-468470" [06b4e0b1-2c97-4940-b10e-8d28f271feaf] Running
I1228 07:19:57.726347 228126 system_pods.go:89] "kube-controller-manager-embed-certs-468470" [e76d0d0d-b42d-4869-83b5-1333eac2625c] Running
I1228 07:19:57.726358 228126 system_pods.go:89] "kube-proxy-r6p5h" [23b27502-129e-42b1-b109-7cba9a746f06] Running
I1228 07:19:57.726367 228126 system_pods.go:89] "kube-scheduler-embed-certs-468470" [1e5f4fda-2f8c-454d-b9a7-1ed6f2937ec9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1228 07:19:57.726378 228126 system_pods.go:89] "storage-provisioner" [c3691d9b-ceb4-482a-b4aa-5344c24b485c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1228 07:19:57.726402 228126 retry.go:84] will retry after 200ms: missing components: kube-dns
I1228 07:19:57.925706 228126 system_pods.go:86] 8 kube-system pods found
I1228 07:19:57.925747 228126 system_pods.go:89] "coredns-7d764666f9-p9hf5" [484845c3-af90-4711-b4c6-f539472eae52] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1228 07:19:57.925755 228126 system_pods.go:89] "etcd-embed-certs-468470" [9f11fb5d-2431-4c88-bb78-4117aeaacfe3] Running
I1228 07:19:57.925762 228126 system_pods.go:89] "kindnet-tvkjv" [6dd9f279-29cb-44c4-954e-6119cec6b6ca] Running
I1228 07:19:57.925767 228126 system_pods.go:89] "kube-apiserver-embed-certs-468470" [06b4e0b1-2c97-4940-b10e-8d28f271feaf] Running
I1228 07:19:57.925773 228126 system_pods.go:89] "kube-controller-manager-embed-certs-468470" [e76d0d0d-b42d-4869-83b5-1333eac2625c] Running
I1228 07:19:57.925777 228126 system_pods.go:89] "kube-proxy-r6p5h" [23b27502-129e-42b1-b109-7cba9a746f06] Running
I1228 07:19:57.925784 228126 system_pods.go:89] "kube-scheduler-embed-certs-468470" [1e5f4fda-2f8c-454d-b9a7-1ed6f2937ec9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1228 07:19:57.925796 228126 system_pods.go:89] "storage-provisioner" [c3691d9b-ceb4-482a-b4aa-5344c24b485c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1228 07:19:58.292842 228126 system_pods.go:86] 8 kube-system pods found
I1228 07:19:58.292878 228126 system_pods.go:89] "coredns-7d764666f9-p9hf5" [484845c3-af90-4711-b4c6-f539472eae52] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1228 07:19:58.292886 228126 system_pods.go:89] "etcd-embed-certs-468470" [9f11fb5d-2431-4c88-bb78-4117aeaacfe3] Running
I1228 07:19:58.292892 228126 system_pods.go:89] "kindnet-tvkjv" [6dd9f279-29cb-44c4-954e-6119cec6b6ca] Running
I1228 07:19:58.292899 228126 system_pods.go:89] "kube-apiserver-embed-certs-468470" [06b4e0b1-2c97-4940-b10e-8d28f271feaf] Running
I1228 07:19:58.292904 228126 system_pods.go:89] "kube-controller-manager-embed-certs-468470" [e76d0d0d-b42d-4869-83b5-1333eac2625c] Running
I1228 07:19:58.292909 228126 system_pods.go:89] "kube-proxy-r6p5h" [23b27502-129e-42b1-b109-7cba9a746f06] Running
I1228 07:19:58.292916 228126 system_pods.go:89] "kube-scheduler-embed-certs-468470" [1e5f4fda-2f8c-454d-b9a7-1ed6f2937ec9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1228 07:19:58.292928 228126 system_pods.go:89] "storage-provisioner" [c3691d9b-ceb4-482a-b4aa-5344c24b485c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1228 07:19:58.632221 228126 system_pods.go:86] 8 kube-system pods found
I1228 07:19:58.632280 228126 system_pods.go:89] "coredns-7d764666f9-p9hf5" [484845c3-af90-4711-b4c6-f539472eae52] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1228 07:19:58.632292 228126 system_pods.go:89] "etcd-embed-certs-468470" [9f11fb5d-2431-4c88-bb78-4117aeaacfe3] Running
I1228 07:19:58.632310 228126 system_pods.go:89] "kindnet-tvkjv" [6dd9f279-29cb-44c4-954e-6119cec6b6ca] Running
I1228 07:19:58.632329 228126 system_pods.go:89] "kube-apiserver-embed-certs-468470" [06b4e0b1-2c97-4940-b10e-8d28f271feaf] Running
I1228 07:19:58.632339 228126 system_pods.go:89] "kube-controller-manager-embed-certs-468470" [e76d0d0d-b42d-4869-83b5-1333eac2625c] Running
I1228 07:19:58.632355 228126 system_pods.go:89] "kube-proxy-r6p5h" [23b27502-129e-42b1-b109-7cba9a746f06] Running
I1228 07:19:58.632371 228126 system_pods.go:89] "kube-scheduler-embed-certs-468470" [1e5f4fda-2f8c-454d-b9a7-1ed6f2937ec9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1228 07:19:58.632391 228126 system_pods.go:89] "storage-provisioner" [c3691d9b-ceb4-482a-b4aa-5344c24b485c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1228 07:19:59.022266 228126 system_pods.go:86] 8 kube-system pods found
I1228 07:19:59.022302 228126 system_pods.go:89] "coredns-7d764666f9-p9hf5" [484845c3-af90-4711-b4c6-f539472eae52] Running
I1228 07:19:59.022309 228126 system_pods.go:89] "etcd-embed-certs-468470" [9f11fb5d-2431-4c88-bb78-4117aeaacfe3] Running
I1228 07:19:59.022315 228126 system_pods.go:89] "kindnet-tvkjv" [6dd9f279-29cb-44c4-954e-6119cec6b6ca] Running
I1228 07:19:59.022320 228126 system_pods.go:89] "kube-apiserver-embed-certs-468470" [06b4e0b1-2c97-4940-b10e-8d28f271feaf] Running
I1228 07:19:59.022326 228126 system_pods.go:89] "kube-controller-manager-embed-certs-468470" [e76d0d0d-b42d-4869-83b5-1333eac2625c] Running
I1228 07:19:59.022340 228126 system_pods.go:89] "kube-proxy-r6p5h" [23b27502-129e-42b1-b109-7cba9a746f06] Running
I1228 07:19:59.022348 228126 system_pods.go:89] "kube-scheduler-embed-certs-468470" [1e5f4fda-2f8c-454d-b9a7-1ed6f2937ec9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1228 07:19:59.022364 228126 system_pods.go:89] "storage-provisioner" [c3691d9b-ceb4-482a-b4aa-5344c24b485c] Running
I1228 07:19:59.022377 228126 system_pods.go:126] duration metric: took 1.298805695s to wait for k8s-apps to be running ...
I1228 07:19:59.022385 228126 system_svc.go:44] waiting for kubelet service to be running ....
I1228 07:19:59.022444 228126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1228 07:19:59.035825 228126 system_svc.go:56] duration metric: took 13.431751ms WaitForService to wait for kubelet
I1228 07:19:59.035905 228126 kubeadm.go:587] duration metric: took 14.382878883s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1228 07:19:59.035947 228126 node_conditions.go:102] verifying NodePressure condition ...
I1228 07:19:59.039570 228126 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
I1228 07:19:59.039606 228126 node_conditions.go:123] node cpu capacity is 2
I1228 07:19:59.039626 228126 node_conditions.go:105] duration metric: took 3.643577ms to run NodePressure ...
I1228 07:19:59.039658 228126 start.go:242] waiting for startup goroutines ...
I1228 07:19:59.039677 228126 start.go:247] waiting for cluster config update ...
I1228 07:19:59.039688 228126 start.go:256] writing updated cluster config ...
I1228 07:19:59.039986 228126 ssh_runner.go:195] Run: rm -f paused
I1228 07:19:59.043553 228126 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1228 07:19:59.047028 228126 pod_ready.go:83] waiting for pod "coredns-7d764666f9-p9hf5" in "kube-system" namespace to be "Ready" or be gone ...
I1228 07:19:59.051181 228126 pod_ready.go:94] pod "coredns-7d764666f9-p9hf5" is "Ready"
I1228 07:19:59.051256 228126 pod_ready.go:86] duration metric: took 4.196853ms for pod "coredns-7d764666f9-p9hf5" in "kube-system" namespace to be "Ready" or be gone ...
I1228 07:19:59.053736 228126 pod_ready.go:83] waiting for pod "etcd-embed-certs-468470" in "kube-system" namespace to be "Ready" or be gone ...
I1228 07:19:59.058200 228126 pod_ready.go:94] pod "etcd-embed-certs-468470" is "Ready"
I1228 07:19:59.058227 228126 pod_ready.go:86] duration metric: took 4.463711ms for pod "etcd-embed-certs-468470" in "kube-system" namespace to be "Ready" or be gone ...
I1228 07:19:59.060566 228126 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-468470" in "kube-system" namespace to be "Ready" or be gone ...
I1228 07:19:59.072788 228126 pod_ready.go:94] pod "kube-apiserver-embed-certs-468470" is "Ready"
I1228 07:19:59.072815 228126 pod_ready.go:86] duration metric: took 12.228105ms for pod "kube-apiserver-embed-certs-468470" in "kube-system" namespace to be "Ready" or be gone ...
I1228 07:19:59.075450 228126 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-468470" in "kube-system" namespace to be "Ready" or be gone ...
I1228 07:19:59.447893 228126 pod_ready.go:94] pod "kube-controller-manager-embed-certs-468470" is "Ready"
I1228 07:19:59.447917 228126 pod_ready.go:86] duration metric: took 372.444355ms for pod "kube-controller-manager-embed-certs-468470" in "kube-system" namespace to be "Ready" or be gone ...
I1228 07:19:59.648394 228126 pod_ready.go:83] waiting for pod "kube-proxy-r6p5h" in "kube-system" namespace to be "Ready" or be gone ...
I1228 07:20:00.086250 228126 pod_ready.go:94] pod "kube-proxy-r6p5h" is "Ready"
I1228 07:20:00.086278 228126 pod_ready.go:86] duration metric: took 437.856104ms for pod "kube-proxy-r6p5h" in "kube-system" namespace to be "Ready" or be gone ...
I1228 07:20:00.302872 228126 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-468470" in "kube-system" namespace to be "Ready" or be gone ...
I1228 07:20:00.648355 228126 pod_ready.go:94] pod "kube-scheduler-embed-certs-468470" is "Ready"
I1228 07:20:00.648390 228126 pod_ready.go:86] duration metric: took 345.489801ms for pod "kube-scheduler-embed-certs-468470" in "kube-system" namespace to be "Ready" or be gone ...
I1228 07:20:00.648438 228126 pod_ready.go:40] duration metric: took 1.604852026s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1228 07:20:00.720223 228126 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
I1228 07:20:00.723439 228126 out.go:203]
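[Editor's note] The "minor skew: 2" line above compares the host kubectl (1.33.2) against the cluster's API server (1.35.0); kubectl is only officially supported within one minor version of the server in either direction. Both versions can be checked with:

    kubectl version
    # prints the client version locally and queries the cluster for the server version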
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
==> containerd <==
Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.318525688Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.318592888Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.318697382Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.318767077Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.318826573Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.318905154Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.318978828Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.319039005Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.319099625Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.319184681Z" level=info msg="Connect containerd service"
Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.319572319Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.320220038Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.334422500Z" level=info msg="Start subscribing containerd event"
Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.334531367Z" level=info msg="Start recovering state"
Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.335224994Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.335440044Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.373191378Z" level=info msg="Start event monitor"
Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.373254024Z" level=info msg="Start cni network conf syncer for default"
Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.373264839Z" level=info msg="Start streaming server"
Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.373273881Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.373283095Z" level=info msg="runtime interface starting up..."
Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.373290103Z" level=info msg="starting plugins..."
Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.373321036Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Dec 28 07:11:51 force-systemd-flag-257442 systemd[1]: Started containerd.service - containerd container runtime.
Dec 28 07:11:51 force-systemd-flag-257442 containerd[760]: time="2025-12-28T07:11:51.375388370Z" level=info msg="containerd successfully booted in 0.084510s"
==> describe nodes <==
command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1228 07:20:01.302796 4901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:20:01.303556 4901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:20:01.305353 4901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:20:01.305933 4901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1228 07:20:01.307761 4901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
==> dmesg <==
[Dec28 06:17] ACPI: SRAT not present
[ +0.000000] ACPI: SRAT not present
[ +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
[ +0.014613] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[ +0.505928] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
[ +0.033587] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
[ +0.752476] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
[ +7.073877] kauditd_printk_skb: 36 callbacks suppressed
==> kernel <==
07:20:01 up 1:02, 0 user, load average: 1.38, 1.59, 1.72
Linux force-systemd-flag-257442 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
==> kubelet <==
Dec 28 07:19:58 force-systemd-flag-257442 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 28 07:19:58 force-systemd-flag-257442 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
Dec 28 07:19:58 force-systemd-flag-257442 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 28 07:19:58 force-systemd-flag-257442 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 28 07:19:58 force-systemd-flag-257442 kubelet[4723]: E1228 07:19:58.760376 4723 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 28 07:19:58 force-systemd-flag-257442 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 28 07:19:58 force-systemd-flag-257442 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 28 07:19:59 force-systemd-flag-257442 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
Dec 28 07:19:59 force-systemd-flag-257442 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 28 07:19:59 force-systemd-flag-257442 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 28 07:19:59 force-systemd-flag-257442 kubelet[4795]: E1228 07:19:59.564120 4795 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 28 07:19:59 force-systemd-flag-257442 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 28 07:19:59 force-systemd-flag-257442 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 28 07:20:00 force-systemd-flag-257442 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
Dec 28 07:20:00 force-systemd-flag-257442 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 28 07:20:00 force-systemd-flag-257442 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 28 07:20:00 force-systemd-flag-257442 kubelet[4808]: E1228 07:20:00.419936 4808 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 28 07:20:00 force-systemd-flag-257442 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 28 07:20:00 force-systemd-flag-257442 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 28 07:20:01 force-systemd-flag-257442 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
Dec 28 07:20:01 force-systemd-flag-257442 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 28 07:20:01 force-systemd-flag-257442 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 28 07:20:01 force-systemd-flag-257442 kubelet[4895]: E1228 07:20:01.291107 4895 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 28 07:20:01 force-systemd-flag-257442 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 28 07:20:01 force-systemd-flag-257442 systemd[1]: kubelet.service: Failed with result 'exit-code'.
-- /stdout --
** stderr **
E1228 07:20:01.034938 231528 logs.go:279] Failed to list containers for "kube-apiserver": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:20:01Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
E1228 07:20:01.049275 231528 logs.go:279] Failed to list containers for "etcd": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:20:01Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
E1228 07:20:01.062724 231528 logs.go:279] Failed to list containers for "coredns": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:20:01Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
E1228 07:20:01.075850 231528 logs.go:279] Failed to list containers for "kube-scheduler": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:20:01Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
E1228 07:20:01.089281 231528 logs.go:279] Failed to list containers for "kube-proxy": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:20:01Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
E1228 07:20:01.103631 231528 logs.go:279] Failed to list containers for "kube-controller-manager": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:20:01Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
E1228 07:20:01.116310 231528 logs.go:279] Failed to list containers for "kindnet": runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-12-28T07:20:01Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
** /stderr **
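[Editor's note, analysis rather than captured log output: the runc "open /run/containerd/runc/k8s.io: no such file or directory" errors above are secondary. That state directory only exists once containerd has run at least one container in the k8s.io namespace, and no pod ever started because kubelet fails its configuration validation on this host (see the ==> kubelet <== section: "kubelet is configured to not run on a host using cgroup v1"). The sketch below illustrates the kind of statfs magic-number check that runc's libcontainer performs to classify the host cgroup mode; it is a minimal illustration, not minikube's or kubelet's code, and the file name and printed messages are assumptions.]

// cgroupmode.go: illustrative sketch. Reports whether /sys/fs/cgroup is a
// cgroup v2 (unified) mount by comparing the filesystem magic number, the
// same signal libcontainer uses. Messages and thresholds are assumptions.
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	var st unix.Statfs_t
	if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
		fmt.Println("statfs /sys/fs/cgroup:", err)
		return
	}
	if st.Type == unix.CGROUP2_SUPER_MAGIC {
		fmt.Println("cgroup v2 (unified): kubelet v1.35 accepts this host")
	} else {
		// Any other magic here (e.g. tmpfs) means a legacy v1 hierarchy,
		// which is exactly what makes kubelet crash-loop in the log above.
		fmt.Println("cgroup v1: kubelet v1.35 fails validation on this host")
	}
}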
helpers_test.go:263: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-257442 -n force-systemd-flag-257442
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-257442 -n force-systemd-flag-257442: exit status 6 (404.837611ms)
-- stdout --
Stopped
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1228 07:20:01.891695 231739 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-257442" does not appear in /home/jenkins/minikube-integration/22352-2380/kubeconfig
** /stderr **
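[Editor's note: the status.go:458 error above is a downstream symptom. The failed start never registered this profile in the kubeconfig, so the endpoint lookup has nothing to resolve and `status` exits 6 with "Stopped". A minimal sketch of that kind of lookup, assuming client-go's clientcmd loader; the profile constant and output wording are illustrative, not minikube's implementation:]

// kubeconfigcheck.go: illustrative sketch mirroring the "does not appear in
// ... kubeconfig" check. Assumes KUBECONFIG points at a single file.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	path := os.Getenv("KUBECONFIG")
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		fmt.Println("load kubeconfig:", err)
		return
	}
	const profile = "force-systemd-flag-257442" // hypothetical lookup target
	if _, ok := cfg.Clusters[profile]; !ok {
		// Matches the situation above: the aborted start wrote no cluster
		// entry, so any endpoint lookup for this profile must fail.
		fmt.Printf("%q does not appear in %s\n", profile, path)
	}
}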
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-flag-257442" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-flag-257442" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-arm64 delete -p force-systemd-flag-257442
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-257442: (2.068256005s)
--- FAIL: TestForceSystemdFlag (501.30s)
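[Editor's note: most of the 501 seconds above are spent waiting for an apiserver that can never come up while kubelet.service restart-loops (the counter reaches 322 by the end of the log). A harness could detect such a loop early by polling systemd's NRestarts property (available in systemd 235 and later) and bail out instead of waiting for the full timeout. The sketch below is illustrative only; the unit name, threshold, and use of `systemctl show` are assumptions, not minikube behavior.]

// restartloop.go: illustrative sketch. Polls a unit's NRestarts counter via
// systemctl and flags a crash loop past an arbitrary threshold.
package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

func nRestarts(unit string) (int, error) {
	out, err := exec.Command("systemctl", "show", unit, "--property=NRestarts").Output()
	if err != nil {
		return 0, err
	}
	// Output looks like "NRestarts=322\n".
	v := strings.TrimSpace(strings.TrimPrefix(string(out), "NRestarts="))
	return strconv.Atoi(v)
}

func main() {
	n, err := nRestarts("kubelet.service")
	if err != nil {
		fmt.Println("systemctl show:", err)
		return
	}
	if n > 5 { // arbitrary threshold for "crash looping"
		fmt.Printf("kubelet restarted %d times; failing fast instead of waiting out the timeout\n", n)
	}
}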