=== RUN TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT TestForceSystemdFlag
docker_test.go:91: (dbg) Run: out/minikube-linux-arm64 start -p force-systemd-flag-310604 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=containerd
E1227 09:11:47.165732 4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/functional-562438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-flag-310604 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=containerd: exit status 109 (8m21.149585506s)
-- stdout --
* [force-systemd-flag-310604] minikube v1.37.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=22344
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/22344-2451/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-2451/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "force-systemd-flag-310604" primary control-plane node in "force-systemd-flag-310604" cluster
* Pulling base image v0.0.48-1766570851-22316 ...
* Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
-- /stdout --
** stderr **
I1227 09:10:42.800135 204666 out.go:360] Setting OutFile to fd 1 ...
I1227 09:10:42.800310 204666 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:10:42.800324 204666 out.go:374] Setting ErrFile to fd 2...
I1227 09:10:42.800331 204666 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:10:42.800714 204666 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-2451/.minikube/bin
I1227 09:10:42.801241 204666 out.go:368] Setting JSON to false
I1227 09:10:42.802140 204666 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3196,"bootTime":1766823447,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
I1227 09:10:42.802232 204666 start.go:143] virtualization:
I1227 09:10:42.805730 204666 out.go:179] * [force-systemd-flag-310604] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I1227 09:10:42.808307 204666 out.go:179] - MINIKUBE_LOCATION=22344
I1227 09:10:42.808421 204666 notify.go:221] Checking for updates...
I1227 09:10:42.814703 204666 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1227 09:10:42.817982 204666 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22344-2451/kubeconfig
I1227 09:10:42.821099 204666 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-2451/.minikube
I1227 09:10:42.824151 204666 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I1227 09:10:42.827145 204666 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1227 09:10:42.830746 204666 config.go:182] Loaded profile config "force-systemd-env-145961": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1227 09:10:42.830898 204666 driver.go:422] Setting default libvirt URI to qemu:///system
I1227 09:10:42.863134 204666 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
I1227 09:10:42.863319 204666 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1227 09:10:42.918342 204666 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 09:10:42.908953528 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1227 09:10:42.918445 204666 docker.go:319] overlay module found
I1227 09:10:42.921686 204666 out.go:179] * Using the docker driver based on user configuration
I1227 09:10:42.924651 204666 start.go:309] selected driver: docker
I1227 09:10:42.924672 204666 start.go:928] validating driver "docker" against <nil>
I1227 09:10:42.924685 204666 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1227 09:10:42.925399 204666 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1227 09:10:43.013713 204666 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 09:10:42.997716009 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1227 09:10:43.013872 204666 start_flags.go:333] no existing cluster config was found, will generate one from the flags
I1227 09:10:43.014115 204666 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
I1227 09:10:43.017181 204666 out.go:179] * Using Docker driver with root privileges
I1227 09:10:43.020064 204666 cni.go:84] Creating CNI manager for ""
I1227 09:10:43.020140 204666 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1227 09:10:43.020159 204666 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
I1227 09:10:43.020250 204666 start.go:353] cluster config:
{Name:force-systemd-flag-310604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-310604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1227 09:10:43.023468 204666 out.go:179] * Starting "force-systemd-flag-310604" primary control-plane node in "force-systemd-flag-310604" cluster
I1227 09:10:43.026267 204666 cache.go:134] Beginning downloading kic base image for docker with containerd
I1227 09:10:43.029182 204666 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
I1227 09:10:43.032164 204666 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I1227 09:10:43.032206 204666 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22344-2451/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
I1227 09:10:43.032217 204666 cache.go:65] Caching tarball of preloaded images
I1227 09:10:43.032253 204666 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
I1227 09:10:43.032309 204666 preload.go:251] Found /home/jenkins/minikube-integration/22344-2451/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I1227 09:10:43.032319 204666 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
I1227 09:10:43.032459 204666 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/config.json ...
I1227 09:10:43.032480 204666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/config.json: {Name:mkbc9c01b6cdf50a409317d5cc6b1625281e0c37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 09:10:43.051266 204666 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
I1227 09:10:43.051291 204666 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
I1227 09:10:43.051312 204666 cache.go:243] Successfully downloaded all kic artifacts
I1227 09:10:43.051342 204666 start.go:360] acquireMachinesLock for force-systemd-flag-310604: {Name:mk07b16eff3a374cb7598dd22df6b68eafb28bf6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1227 09:10:43.051447 204666 start.go:364] duration metric: took 84.235µs to acquireMachinesLock for "force-systemd-flag-310604"
I1227 09:10:43.051477 204666 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-310604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-310604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1227 09:10:43.051550 204666 start.go:125] createHost starting for "" (driver="docker")
I1227 09:10:43.055029 204666 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
I1227 09:10:43.055272 204666 start.go:159] libmachine.API.Create for "force-systemd-flag-310604" (driver="docker")
I1227 09:10:43.055308 204666 client.go:173] LocalClient.Create starting
I1227 09:10:43.055382 204666 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca.pem
I1227 09:10:43.055425 204666 main.go:144] libmachine: Decoding PEM data...
I1227 09:10:43.055445 204666 main.go:144] libmachine: Parsing certificate...
I1227 09:10:43.055497 204666 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-2451/.minikube/certs/cert.pem
I1227 09:10:43.055523 204666 main.go:144] libmachine: Decoding PEM data...
I1227 09:10:43.055539 204666 main.go:144] libmachine: Parsing certificate...
I1227 09:10:43.055903 204666 cli_runner.go:164] Run: docker network inspect force-systemd-flag-310604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1227 09:10:43.071470 204666 cli_runner.go:211] docker network inspect force-systemd-flag-310604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1227 09:10:43.071558 204666 network_create.go:284] running [docker network inspect force-systemd-flag-310604] to gather additional debugging logs...
I1227 09:10:43.071581 204666 cli_runner.go:164] Run: docker network inspect force-systemd-flag-310604
W1227 09:10:43.087467 204666 cli_runner.go:211] docker network inspect force-systemd-flag-310604 returned with exit code 1
I1227 09:10:43.087522 204666 network_create.go:287] error running [docker network inspect force-systemd-flag-310604]: docker network inspect force-systemd-flag-310604: exit status 1
stdout:
[]
stderr:
Error response from daemon: network force-systemd-flag-310604 not found
I1227 09:10:43.087536 204666 network_create.go:289] output of [docker network inspect force-systemd-flag-310604]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network force-systemd-flag-310604 not found
** /stderr **
I1227 09:10:43.087649 204666 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1227 09:10:43.105322 204666 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3499bc401779 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:36:76:98:a8:d7:e7} reservation:<nil>}
I1227 09:10:43.105737 204666 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-c1260ea8a496 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:fe:1e:3f:a3:f0:1f} reservation:<nil>}
I1227 09:10:43.106114 204666 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e5173b3fb685 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c2:6a:35:6e:4e:02} reservation:<nil>}
I1227 09:10:43.106601 204666 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a15060}
I1227 09:10:43.106630 204666 network_create.go:124] attempt to create docker network force-systemd-flag-310604 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
I1227 09:10:43.106687 204666 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-310604 force-systemd-flag-310604
I1227 09:10:43.181323 204666 network_create.go:108] docker network force-systemd-flag-310604 192.168.76.0/24 created
I1227 09:10:43.181368 204666 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-flag-310604" container
I1227 09:10:43.181450 204666 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1227 09:10:43.199791 204666 cli_runner.go:164] Run: docker volume create force-systemd-flag-310604 --label name.minikube.sigs.k8s.io=force-systemd-flag-310604 --label created_by.minikube.sigs.k8s.io=true
I1227 09:10:43.217217 204666 oci.go:103] Successfully created a docker volume force-systemd-flag-310604
I1227 09:10:43.217303 204666 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-310604-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-310604 --entrypoint /usr/bin/test -v force-systemd-flag-310604:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
I1227 09:10:43.768592 204666 oci.go:107] Successfully prepared a docker volume force-systemd-flag-310604
I1227 09:10:43.768647 204666 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I1227 09:10:43.768656 204666 kic.go:194] Starting extracting preloaded images to volume ...
I1227 09:10:43.768730 204666 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22344-2451/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-310604:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
I1227 09:10:47.941425 204666 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22344-2451/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-310604:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (4.172659446s)
I1227 09:10:47.941459 204666 kic.go:203] duration metric: took 4.172798697s to extract preloaded images to volume ...
W1227 09:10:47.941608 204666 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1227 09:10:47.941723 204666 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1227 09:10:48.016863 204666 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-310604 --name force-systemd-flag-310604 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-310604 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-310604 --network force-systemd-flag-310604 --ip 192.168.76.2 --volume force-systemd-flag-310604:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
I1227 09:10:48.339827 204666 cli_runner.go:164] Run: docker container inspect force-systemd-flag-310604 --format={{.State.Running}}
I1227 09:10:48.361703 204666 cli_runner.go:164] Run: docker container inspect force-systemd-flag-310604 --format={{.State.Status}}
I1227 09:10:48.386273 204666 cli_runner.go:164] Run: docker exec force-systemd-flag-310604 stat /var/lib/dpkg/alternatives/iptables
I1227 09:10:48.435149 204666 oci.go:144] the created container "force-systemd-flag-310604" has a running status.
I1227 09:10:48.435183 204666 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22344-2451/.minikube/machines/force-systemd-flag-310604/id_rsa...
I1227 09:10:48.595417 204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/machines/force-systemd-flag-310604/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I1227 09:10:48.595508 204666 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22344-2451/.minikube/machines/force-systemd-flag-310604/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1227 09:10:48.621694 204666 cli_runner.go:164] Run: docker container inspect force-systemd-flag-310604 --format={{.State.Status}}
I1227 09:10:48.646093 204666 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1227 09:10:48.646113 204666 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-310604 chown docker:docker /home/docker/.ssh/authorized_keys]
I1227 09:10:48.702415 204666 cli_runner.go:164] Run: docker container inspect force-systemd-flag-310604 --format={{.State.Status}}
I1227 09:10:48.724275 204666 machine.go:94] provisionDockerMachine start ...
I1227 09:10:48.724381 204666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-310604
I1227 09:10:48.753127 204666 main.go:144] libmachine: Using SSH client type: native
I1227 09:10:48.753463 204666 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33043 <nil> <nil>}
I1227 09:10:48.753473 204666 main.go:144] libmachine: About to run SSH command:
hostname
I1227 09:10:48.754067 204666 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48476->127.0.0.1:33043: read: connection reset by peer
I1227 09:10:51.891685 204666 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-310604
I1227 09:10:51.891708 204666 ubuntu.go:182] provisioning hostname "force-systemd-flag-310604"
I1227 09:10:51.891772 204666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-310604
I1227 09:10:51.909491 204666 main.go:144] libmachine: Using SSH client type: native
I1227 09:10:51.909807 204666 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33043 <nil> <nil>}
I1227 09:10:51.909825 204666 main.go:144] libmachine: About to run SSH command:
sudo hostname force-systemd-flag-310604 && echo "force-systemd-flag-310604" | sudo tee /etc/hostname
I1227 09:10:52.057961 204666 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-310604
I1227 09:10:52.058064 204666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-310604
I1227 09:10:52.075700 204666 main.go:144] libmachine: Using SSH client type: native
I1227 09:10:52.076053 204666 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33043 <nil> <nil>}
I1227 09:10:52.076078 204666 main.go:144] libmachine: About to run SSH command:
if ! grep -xq '.*\sforce-systemd-flag-310604' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-310604/g' /etc/hosts;
else
echo '127.0.1.1 force-systemd-flag-310604' | sudo tee -a /etc/hosts;
fi
fi
I1227 09:10:52.217368 204666 main.go:144] libmachine: SSH cmd err, output: <nil>:
I1227 09:10:52.217456 204666 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22344-2451/.minikube CaCertPath:/home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22344-2451/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22344-2451/.minikube}
I1227 09:10:52.217491 204666 ubuntu.go:190] setting up certificates
I1227 09:10:52.217534 204666 provision.go:84] configureAuth start
I1227 09:10:52.217619 204666 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-310604
I1227 09:10:52.237744 204666 provision.go:143] copyHostCerts
I1227 09:10:52.237795 204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22344-2451/.minikube/ca.pem
I1227 09:10:52.237833 204666 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-2451/.minikube/ca.pem, removing ...
I1227 09:10:52.237841 204666 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-2451/.minikube/ca.pem
I1227 09:10:52.238083 204666 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22344-2451/.minikube/ca.pem (1078 bytes)
I1227 09:10:52.238190 204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22344-2451/.minikube/cert.pem
I1227 09:10:52.238504 204666 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-2451/.minikube/cert.pem, removing ...
I1227 09:10:52.238511 204666 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-2451/.minikube/cert.pem
I1227 09:10:52.238894 204666 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22344-2451/.minikube/cert.pem (1123 bytes)
I1227 09:10:52.239000 204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22344-2451/.minikube/key.pem
I1227 09:10:52.239017 204666 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-2451/.minikube/key.pem, removing ...
I1227 09:10:52.239022 204666 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-2451/.minikube/key.pem
I1227 09:10:52.239052 204666 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22344-2451/.minikube/key.pem (1679 bytes)
I1227 09:10:52.239110 204666 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22344-2451/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-310604 san=[127.0.0.1 192.168.76.2 force-systemd-flag-310604 localhost minikube]
I1227 09:10:52.569945 204666 provision.go:177] copyRemoteCerts
I1227 09:10:52.570044 204666 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1227 09:10:52.570093 204666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-310604
I1227 09:10:52.587912 204666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/force-systemd-flag-310604/id_rsa Username:docker}
I1227 09:10:52.687698 204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I1227 09:10:52.687844 204666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1227 09:10:52.705320 204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/machines/server.pem -> /etc/docker/server.pem
I1227 09:10:52.705381 204666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
I1227 09:10:52.723327 204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I1227 09:10:52.723385 204666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1227 09:10:52.740566 204666 provision.go:87] duration metric: took 522.993586ms to configureAuth
I1227 09:10:52.740592 204666 ubuntu.go:206] setting minikube options for container-runtime
I1227 09:10:52.740766 204666 config.go:182] Loaded profile config "force-systemd-flag-310604": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1227 09:10:52.740780 204666 machine.go:97] duration metric: took 4.016481436s to provisionDockerMachine
I1227 09:10:52.740787 204666 client.go:176] duration metric: took 9.685467552s to LocalClient.Create
I1227 09:10:52.740816 204666 start.go:167] duration metric: took 9.685545363s to libmachine.API.Create "force-systemd-flag-310604"
I1227 09:10:52.740827 204666 start.go:293] postStartSetup for "force-systemd-flag-310604" (driver="docker")
I1227 09:10:52.740837 204666 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1227 09:10:52.740910 204666 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1227 09:10:52.740954 204666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-310604
I1227 09:10:52.757935 204666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/force-systemd-flag-310604/id_rsa Username:docker}
I1227 09:10:52.856170 204666 ssh_runner.go:195] Run: cat /etc/os-release
I1227 09:10:52.859510 204666 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1227 09:10:52.859542 204666 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1227 09:10:52.859553 204666 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-2451/.minikube/addons for local assets ...
I1227 09:10:52.859606 204666 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-2451/.minikube/files for local assets ...
I1227 09:10:52.859688 204666 filesync.go:149] local asset: /home/jenkins/minikube-integration/22344-2451/.minikube/files/etc/ssl/certs/42882.pem -> 42882.pem in /etc/ssl/certs
I1227 09:10:52.859699 204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/files/etc/ssl/certs/42882.pem -> /etc/ssl/certs/42882.pem
I1227 09:10:52.859802 204666 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1227 09:10:52.867151 204666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/files/etc/ssl/certs/42882.pem --> /etc/ssl/certs/42882.pem (1708 bytes)
I1227 09:10:52.884851 204666 start.go:296] duration metric: took 144.00855ms for postStartSetup
I1227 09:10:52.885206 204666 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-310604
I1227 09:10:52.901828 204666 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/config.json ...
I1227 09:10:52.902117 204666 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1227 09:10:52.902171 204666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-310604
I1227 09:10:52.918960 204666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/force-systemd-flag-310604/id_rsa Username:docker}
I1227 09:10:53.021390 204666 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1227 09:10:53.026246 204666 start.go:128] duration metric: took 9.974681148s to createHost
I1227 09:10:53.026316 204666 start.go:83] releasing machines lock for "force-systemd-flag-310604", held for 9.974853178s
I1227 09:10:53.026407 204666 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-310604
I1227 09:10:53.043542 204666 ssh_runner.go:195] Run: cat /version.json
I1227 09:10:53.043598 204666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-310604
I1227 09:10:53.043860 204666 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1227 09:10:53.043921 204666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-310604
I1227 09:10:53.061875 204666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/force-systemd-flag-310604/id_rsa Username:docker}
I1227 09:10:53.068175 204666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/force-systemd-flag-310604/id_rsa Username:docker}
I1227 09:10:53.255401 204666 ssh_runner.go:195] Run: systemctl --version
I1227 09:10:53.262139 204666 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1227 09:10:53.266534 204666 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1227 09:10:53.266627 204666 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1227 09:10:53.295238 204666 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
I1227 09:10:53.295259 204666 start.go:496] detecting cgroup driver to use...
I1227 09:10:53.295273 204666 start.go:500] using "systemd" cgroup driver as enforced via flags
I1227 09:10:53.295340 204666 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1227 09:10:53.310658 204666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1227 09:10:53.324980 204666 docker.go:218] disabling cri-docker service (if available) ...
I1227 09:10:53.325045 204666 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1227 09:10:53.342693 204666 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1227 09:10:53.361786 204666 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1227 09:10:53.481591 204666 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1227 09:10:53.609612 204666 docker.go:234] disabling docker service ...
I1227 09:10:53.609677 204666 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1227 09:10:53.632809 204666 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1227 09:10:53.646556 204666 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1227 09:10:53.776893 204666 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1227 09:10:53.893803 204666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1227 09:10:53.906923 204666 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1227 09:10:53.921921 204666 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1227 09:10:53.930787 204666 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1227 09:10:53.940192 204666 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
I1227 09:10:53.940311 204666 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
I1227 09:10:53.949596 204666 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1227 09:10:53.959130 204666 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1227 09:10:53.967866 204666 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1227 09:10:53.977401 204666 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1227 09:10:53.985565 204666 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1227 09:10:53.994878 204666 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1227 09:10:54.004397 204666 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1227 09:10:54.016162 204666 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1227 09:10:54.025513 204666 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1227 09:10:54.034319 204666 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 09:10:54.150756 204666 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1227 09:10:54.285989 204666 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
I1227 09:10:54.286115 204666 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1227 09:10:54.290075 204666 start.go:574] Will wait 60s for crictl version
I1227 09:10:54.290185 204666 ssh_runner.go:195] Run: which crictl
I1227 09:10:54.293949 204666 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1227 09:10:54.321666 204666 start.go:590] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v2.2.1
RuntimeApiVersion: v1
I1227 09:10:54.321783 204666 ssh_runner.go:195] Run: containerd --version
I1227 09:10:54.345867 204666 ssh_runner.go:195] Run: containerd --version
I1227 09:10:54.376785 204666 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
I1227 09:10:54.379751 204666 cli_runner.go:164] Run: docker network inspect force-systemd-flag-310604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1227 09:10:54.401792 204666 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I1227 09:10:54.406481 204666 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1227 09:10:54.416271 204666 kubeadm.go:884] updating cluster {Name:force-systemd-flag-310604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-310604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
I1227 09:10:54.416393 204666 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I1227 09:10:54.416457 204666 ssh_runner.go:195] Run: sudo crictl images --output json
I1227 09:10:54.444036 204666 containerd.go:635] all images are preloaded for containerd runtime.
I1227 09:10:54.444061 204666 containerd.go:542] Images already preloaded, skipping extraction
I1227 09:10:54.444118 204666 ssh_runner.go:195] Run: sudo crictl images --output json
I1227 09:10:54.485541 204666 containerd.go:635] all images are preloaded for containerd runtime.
I1227 09:10:54.485561 204666 cache_images.go:86] Images are preloaded, skipping loading
I1227 09:10:54.485569 204666 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 containerd true true} ...
I1227 09:10:54.485974 204666 kubeadm.go:947] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-310604 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
[Install]
config:
{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-310604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1227 09:10:54.486092 204666 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
I1227 09:10:54.526429 204666 cni.go:84] Creating CNI manager for ""
I1227 09:10:54.526503 204666 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1227 09:10:54.526540 204666 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1227 09:10:54.526596 204666 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-310604 NodeName:force-systemd-flag-310604 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1227 09:10:54.526756 204666 kubeadm.go:203] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.76.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
name: "force-systemd-flag-310604"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.76.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.35.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I1227 09:10:54.526867 204666 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
I1227 09:10:54.534776 204666 binaries.go:51] Found k8s binaries, skipping transfer
I1227 09:10:54.534862 204666 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1227 09:10:54.542666 204666 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
I1227 09:10:54.555276 204666 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1227 09:10:54.568252 204666 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
I1227 09:10:54.581175 204666 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I1227 09:10:54.584678 204666 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1227 09:10:54.594342 204666 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 09:10:54.722742 204666 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1227 09:10:54.739944 204666 certs.go:69] Setting up /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604 for IP: 192.168.76.2
I1227 09:10:54.739989 204666 certs.go:195] generating shared ca certs ...
I1227 09:10:54.740005 204666 certs.go:227] acquiring lock for ca certs: {Name:mk774ac921aa16ecd5f2d791fd87948cd01f1dbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 09:10:54.740163 204666 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22344-2451/.minikube/ca.key
I1227 09:10:54.740222 204666 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22344-2451/.minikube/proxy-client-ca.key
I1227 09:10:54.740235 204666 certs.go:257] generating profile certs ...
I1227 09:10:54.740300 204666 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/client.key
I1227 09:10:54.740327 204666 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/client.crt with IP's: []
I1227 09:10:54.883927 204666 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/client.crt ...
I1227 09:10:54.883962 204666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/client.crt: {Name:mkaf7a59941c35faf8629e9c6734e607330f0676 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 09:10:54.884180 204666 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/client.key ...
I1227 09:10:54.884200 204666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/client.key: {Name:mk15fe73d8be76bfb61d2cf22a9a54c4980a1213 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 09:10:54.884320 204666 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/apiserver.key.c8f9b72c
I1227 09:10:54.884341 204666 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/apiserver.crt.c8f9b72c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
I1227 09:10:55.261500 204666 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/apiserver.crt.c8f9b72c ...
I1227 09:10:55.261538 204666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/apiserver.crt.c8f9b72c: {Name:mkd8a84348a7ab947593ad31a2bf6eac08baadd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 09:10:55.261722 204666 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/apiserver.key.c8f9b72c ...
I1227 09:10:55.261739 204666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/apiserver.key.c8f9b72c: {Name:mk0b1844eb49c1d885fbeaa194740cfbf0f66c5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 09:10:55.261815 204666 certs.go:382] copying /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/apiserver.crt.c8f9b72c -> /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/apiserver.crt
I1227 09:10:55.261907 204666 certs.go:386] copying /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/apiserver.key.c8f9b72c -> /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/apiserver.key
I1227 09:10:55.261975 204666 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/proxy-client.key
I1227 09:10:55.261997 204666 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/proxy-client.crt with IP's: []
I1227 09:10:55.489265 204666 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/proxy-client.crt ...
I1227 09:10:55.489301 204666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/proxy-client.crt: {Name:mkd9f18caf462c3a8d2a28c4ddec386f0dbd816a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 09:10:55.489549 204666 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/proxy-client.key ...
I1227 09:10:55.489567 204666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/proxy-client.key: {Name:mk9ff37441688b65bb6af030e9075e756fa5b4e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 09:10:55.489687 204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I1227 09:10:55.489718 204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I1227 09:10:55.489742 204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I1227 09:10:55.489765 204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I1227 09:10:55.489782 204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I1227 09:10:55.489806 204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I1227 09:10:55.489826 204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I1227 09:10:55.489837 204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I1227 09:10:55.489910 204666 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/4288.pem (1338 bytes)
W1227 09:10:55.489959 204666 certs.go:480] ignoring /home/jenkins/minikube-integration/22344-2451/.minikube/certs/4288_empty.pem, impossibly tiny 0 bytes
I1227 09:10:55.489975 204666 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca-key.pem (1675 bytes)
I1227 09:10:55.490010 204666 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca.pem (1078 bytes)
I1227 09:10:55.490045 204666 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/cert.pem (1123 bytes)
I1227 09:10:55.490073 204666 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/key.pem (1679 bytes)
I1227 09:10:55.490121 204666 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-2451/.minikube/files/etc/ssl/certs/42882.pem (1708 bytes)
I1227 09:10:55.490158 204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I1227 09:10:55.490176 204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/4288.pem -> /usr/share/ca-certificates/4288.pem
I1227 09:10:55.490197 204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/files/etc/ssl/certs/42882.pem -> /usr/share/ca-certificates/42882.pem
I1227 09:10:55.490797 204666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1227 09:10:55.520180 204666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
I1227 09:10:55.539728 204666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1227 09:10:55.558726 204666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1227 09:10:55.577125 204666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
I1227 09:10:55.595030 204666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1227 09:10:55.612583 204666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1227 09:10:55.629890 204666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1227 09:10:55.647395 204666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1227 09:10:55.664281 204666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/certs/4288.pem --> /usr/share/ca-certificates/4288.pem (1338 bytes)
I1227 09:10:55.682209 204666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/files/etc/ssl/certs/42882.pem --> /usr/share/ca-certificates/42882.pem (1708 bytes)
I1227 09:10:55.699375 204666 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
I1227 09:10:55.713225 204666 ssh_runner.go:195] Run: openssl version
I1227 09:10:55.719549 204666 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42882.pem
I1227 09:10:55.726782 204666 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42882.pem /etc/ssl/certs/42882.pem
I1227 09:10:55.734088 204666 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42882.pem
I1227 09:10:55.737803 204666 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 08:34 /usr/share/ca-certificates/42882.pem
I1227 09:10:55.737867 204666 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42882.pem
I1227 09:10:55.779013 204666 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I1227 09:10:55.786846 204666 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/42882.pem /etc/ssl/certs/3ec20f2e.0
I1227 09:10:55.794882 204666 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1227 09:10:55.802676 204666 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1227 09:10:55.810367 204666 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1227 09:10:55.814525 204666 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 08:28 /usr/share/ca-certificates/minikubeCA.pem
I1227 09:10:55.814592 204666 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1227 09:10:55.856125 204666 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1227 09:10:55.863440 204666 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
I1227 09:10:55.870807 204666 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4288.pem
I1227 09:10:55.877797 204666 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4288.pem /etc/ssl/certs/4288.pem
I1227 09:10:55.885325 204666 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4288.pem
I1227 09:10:55.889003 204666 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 08:34 /usr/share/ca-certificates/4288.pem
I1227 09:10:55.889078 204666 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4288.pem
I1227 09:10:55.930128 204666 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I1227 09:10:55.937477 204666 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4288.pem /etc/ssl/certs/51391683.0
I1227 09:10:55.944699 204666 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1227 09:10:55.948214 204666 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1227 09:10:55.948267 204666 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-310604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-310604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1227 09:10:55.948345 204666 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1227 09:10:55.948412 204666 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1227 09:10:55.985105 204666 cri.go:96] found id: ""
I1227 09:10:55.985202 204666 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1227 09:10:55.994392 204666 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1227 09:10:56.002476 204666 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1227 09:10:56.002588 204666 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1227 09:10:56.013561 204666 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1227 09:10:56.013641 204666 kubeadm.go:158] found existing configuration files:
I1227 09:10:56.013734 204666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1227 09:10:56.026163 204666 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1227 09:10:56.026252 204666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1227 09:10:56.034452 204666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1227 09:10:56.042951 204666 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1227 09:10:56.043043 204666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1227 09:10:56.051250 204666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1227 09:10:56.059162 204666 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1227 09:10:56.059229 204666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1227 09:10:56.066603 204666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1227 09:10:56.074518 204666 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1227 09:10:56.074592 204666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1227 09:10:56.081945 204666 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1227 09:10:56.121942 204666 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
I1227 09:10:56.122047 204666 kubeadm.go:319] [preflight] Running pre-flight checks
I1227 09:10:56.212923 204666 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1227 09:10:56.213040 204666 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1227 09:10:56.213099 204666 kubeadm.go:319] OS: Linux
I1227 09:10:56.213162 204666 kubeadm.go:319] CGROUPS_CPU: enabled
I1227 09:10:56.213227 204666 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1227 09:10:56.213298 204666 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1227 09:10:56.213364 204666 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1227 09:10:56.213434 204666 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1227 09:10:56.213512 204666 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1227 09:10:56.213583 204666 kubeadm.go:319] CGROUPS_PIDS: enabled
I1227 09:10:56.213655 204666 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1227 09:10:56.213718 204666 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1227 09:10:56.276575 204666 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1227 09:10:56.276758 204666 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1227 09:10:56.276888 204666 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1227 09:10:56.284403 204666 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1227 09:10:56.290757 204666 out.go:252] - Generating certificates and keys ...
I1227 09:10:56.290854 204666 kubeadm.go:319] [certs] Using existing ca certificate authority
I1227 09:10:56.290926 204666 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1227 09:10:56.622516 204666 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1227 09:10:57.129861 204666 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1227 09:10:57.426106 204666 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1227 09:10:57.593509 204666 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1227 09:10:57.874524 204666 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1227 09:10:57.874936 204666 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-310604 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
I1227 09:10:58.122828 204666 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1227 09:10:58.123152 204666 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-310604 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
I1227 09:10:58.265970 204666 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1227 09:10:58.561360 204666 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1227 09:10:58.701478 204666 kubeadm.go:319] [certs] Generating "sa" key and public key
I1227 09:10:58.701573 204666 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1227 09:10:58.886739 204666 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1227 09:10:59.201465 204666 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1227 09:11:00.021317 204666 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1227 09:11:00.354783 204666 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1227 09:11:00.706525 204666 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1227 09:11:00.707614 204666 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1227 09:11:00.710676 204666 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1227 09:11:00.714234 204666 out.go:252] - Booting up control plane ...
I1227 09:11:00.714348 204666 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1227 09:11:00.714433 204666 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1227 09:11:00.720333 204666 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1227 09:11:00.746371 204666 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1227 09:11:00.746513 204666 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1227 09:11:00.754160 204666 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1227 09:11:00.754510 204666 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1227 09:11:00.754557 204666 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1227 09:11:00.882317 204666 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1227 09:11:00.882439 204666 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1227 09:15:00.883060 204666 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001055113s
I1227 09:15:00.883095 204666 kubeadm.go:319]
I1227 09:15:00.883153 204666 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1227 09:15:00.883192 204666 kubeadm.go:319] - The kubelet is not running
I1227 09:15:00.883301 204666 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1227 09:15:00.883311 204666 kubeadm.go:319]
I1227 09:15:00.883416 204666 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1227 09:15:00.883451 204666 kubeadm.go:319] - 'systemctl status kubelet'
I1227 09:15:00.883488 204666 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1227 09:15:00.883494 204666 kubeadm.go:319]
I1227 09:15:00.894305 204666 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1227 09:15:00.894751 204666 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1227 09:15:00.894868 204666 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1227 09:15:00.895118 204666 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1227 09:15:00.895131 204666 kubeadm.go:319]
I1227 09:15:00.895203 204666 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
W1227 09:15:00.895331 204666 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-310604 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-310604 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001055113s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-310604 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-310604 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001055113s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
I1227 09:15:00.895433 204666 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I1227 09:15:01.313761 204666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1227 09:15:01.333363 204666 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1227 09:15:01.333466 204666 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1227 09:15:01.342629 204666 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1227 09:15:01.342662 204666 kubeadm.go:158] found existing configuration files:
I1227 09:15:01.342749 204666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1227 09:15:01.353052 204666 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1227 09:15:01.353146 204666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1227 09:15:01.361396 204666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1227 09:15:01.369967 204666 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1227 09:15:01.370034 204666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1227 09:15:01.378378 204666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1227 09:15:01.387663 204666 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1227 09:15:01.387748 204666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1227 09:15:01.396344 204666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1227 09:15:01.405204 204666 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1227 09:15:01.405270 204666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1227 09:15:01.413447 204666 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1227 09:15:01.463956 204666 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
I1227 09:15:01.464308 204666 kubeadm.go:319] [preflight] Running pre-flight checks
I1227 09:15:01.552614 204666 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1227 09:15:01.552773 204666 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1227 09:15:01.552857 204666 kubeadm.go:319] OS: Linux
I1227 09:15:01.552946 204666 kubeadm.go:319] CGROUPS_CPU: enabled
I1227 09:15:01.553026 204666 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1227 09:15:01.553108 204666 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1227 09:15:01.553189 204666 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1227 09:15:01.553273 204666 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1227 09:15:01.553355 204666 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1227 09:15:01.553433 204666 kubeadm.go:319] CGROUPS_PIDS: enabled
I1227 09:15:01.553518 204666 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1227 09:15:01.553597 204666 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1227 09:15:01.623916 204666 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1227 09:15:01.624121 204666 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1227 09:15:01.624266 204666 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1227 09:15:01.629993 204666 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1227 09:15:01.633473 204666 out.go:252] - Generating certificates and keys ...
I1227 09:15:01.633564 204666 kubeadm.go:319] [certs] Using existing ca certificate authority
I1227 09:15:01.633648 204666 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1227 09:15:01.633732 204666 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1227 09:15:01.633816 204666 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
I1227 09:15:01.633931 204666 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
I1227 09:15:01.634153 204666 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
I1227 09:15:01.634509 204666 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
I1227 09:15:01.634871 204666 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
I1227 09:15:01.635227 204666 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1227 09:15:01.635557 204666 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1227 09:15:01.635839 204666 kubeadm.go:319] [certs] Using the existing "sa" key
I1227 09:15:01.635902 204666 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1227 09:15:02.134928 204666 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1227 09:15:02.350950 204666 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1227 09:15:02.446843 204666 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1227 09:15:02.770471 204666 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1227 09:15:03.012958 204666 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1227 09:15:03.014723 204666 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1227 09:15:03.017019 204666 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1227 09:15:03.020102 204666 out.go:252] - Booting up control plane ...
I1227 09:15:03.020229 204666 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1227 09:15:03.020308 204666 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1227 09:15:03.022628 204666 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1227 09:15:03.047367 204666 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1227 09:15:03.047562 204666 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1227 09:15:03.054984 204666 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1227 09:15:03.055427 204666 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1227 09:15:03.055683 204666 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1227 09:15:03.202781 204666 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1227 09:15:03.202915 204666 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1227 09:19:03.204484 204666 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000488648s
I1227 09:19:03.204509 204666 kubeadm.go:319]
I1227 09:19:03.204566 204666 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1227 09:19:03.204600 204666 kubeadm.go:319] - The kubelet is not running
I1227 09:19:03.204705 204666 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1227 09:19:03.204710 204666 kubeadm.go:319]
I1227 09:19:03.204814 204666 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1227 09:19:03.204846 204666 kubeadm.go:319] - 'systemctl status kubelet'
I1227 09:19:03.204877 204666 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1227 09:19:03.204881 204666 kubeadm.go:319]
I1227 09:19:03.217785 204666 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1227 09:19:03.218533 204666 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1227 09:19:03.218725 204666 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1227 09:19:03.219191 204666 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I1227 09:19:03.219198 204666 kubeadm.go:319]
I1227 09:19:03.219319 204666 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
I1227 09:19:03.219385 204666 kubeadm.go:403] duration metric: took 8m7.271122438s to StartCluster
I1227 09:19:03.219439 204666 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1227 09:19:03.219506 204666 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
I1227 09:19:03.293507 204666 cri.go:96] found id: ""
I1227 09:19:03.293587 204666 logs.go:282] 0 containers: []
W1227 09:19:03.293612 204666 logs.go:284] No container was found matching "kube-apiserver"
I1227 09:19:03.293653 204666 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1227 09:19:03.293737 204666 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
I1227 09:19:03.328940 204666 cri.go:96] found id: ""
I1227 09:19:03.328973 204666 logs.go:282] 0 containers: []
W1227 09:19:03.328982 204666 logs.go:284] No container was found matching "etcd"
I1227 09:19:03.328990 204666 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1227 09:19:03.329064 204666 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
I1227 09:19:03.374166 204666 cri.go:96] found id: ""
I1227 09:19:03.374236 204666 logs.go:282] 0 containers: []
W1227 09:19:03.374260 204666 logs.go:284] No container was found matching "coredns"
I1227 09:19:03.374286 204666 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1227 09:19:03.374375 204666 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
I1227 09:19:03.422359 204666 cri.go:96] found id: ""
I1227 09:19:03.422395 204666 logs.go:282] 0 containers: []
W1227 09:19:03.422405 204666 logs.go:284] No container was found matching "kube-scheduler"
I1227 09:19:03.422411 204666 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1227 09:19:03.422486 204666 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
I1227 09:19:03.481975 204666 cri.go:96] found id: ""
I1227 09:19:03.482015 204666 logs.go:282] 0 containers: []
W1227 09:19:03.482024 204666 logs.go:284] No container was found matching "kube-proxy"
I1227 09:19:03.482030 204666 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1227 09:19:03.482095 204666 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
I1227 09:19:03.538264 204666 cri.go:96] found id: ""
I1227 09:19:03.538290 204666 logs.go:282] 0 containers: []
W1227 09:19:03.538300 204666 logs.go:284] No container was found matching "kube-controller-manager"
I1227 09:19:03.538307 204666 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1227 09:19:03.538373 204666 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
I1227 09:19:03.592079 204666 cri.go:96] found id: ""
I1227 09:19:03.592102 204666 logs.go:282] 0 containers: []
W1227 09:19:03.592110 204666 logs.go:284] No container was found matching "kindnet"
I1227 09:19:03.592121 204666 logs.go:123] Gathering logs for describe nodes ...
I1227 09:19:03.592134 204666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1227 09:19:03.692446 204666 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1227 09:19:03.683947 4828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 09:19:03.684806 4828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 09:19:03.686564 4828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 09:19:03.686877 4828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 09:19:03.688421 4828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
E1227 09:19:03.683947 4828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 09:19:03.684806 4828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 09:19:03.686564 4828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 09:19:03.686877 4828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 09:19:03.688421 4828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I1227 09:19:03.692475 204666 logs.go:123] Gathering logs for containerd ...
I1227 09:19:03.692487 204666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1227 09:19:03.740848 204666 logs.go:123] Gathering logs for container status ...
I1227 09:19:03.740925 204666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1227 09:19:03.782208 204666 logs.go:123] Gathering logs for kubelet ...
I1227 09:19:03.782242 204666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1227 09:19:03.874946 204666 logs.go:123] Gathering logs for dmesg ...
I1227 09:19:03.874978 204666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
W1227 09:19:03.889356 204666 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000488648s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
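kubeadm's advice above targets the kubelet unit inside the node; with the docker driver those units live inside the kic container, so a minimal sketch of the same checks (same profile, container still up) is:

  # inspect the kubelet unit inside the minikube node container
  out/minikube-linux-arm64 -p force-systemd-flag-310604 ssh "sudo systemctl status kubelet"
  out/minikube-linux-arm64 -p force-systemd-flag-310604 ssh "sudo journalctl -xeu kubelet | tail -n 50"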
W1227 09:19:03.889407 204666 out.go:285] *
*
W1227 09:19:03.889455 204666 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000488648s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
W1227 09:19:03.889475 204666 out.go:285] *
*
W1227 09:19:03.889727 204666 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1227 09:19:03.894675 204666 out.go:203]
W1227 09:19:03.897830 204666 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000488648s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
W1227 09:19:03.897891 204666 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W1227 09:19:03.897912 204666 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
* Related issue: https://github.com/kubernetes/minikube/issues/4172
I1227 09:19:03.901086 204666 out.go:203]
** /stderr **
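The suggestion and the cgroup warnings above point at the cgroup driver: the host Docker reports CgroupDriver:cgroupfs on a cgroups v1 kernel, while --force-systemd asks the runtime for systemd. A plausible retry, sketching only the flags the output itself suggests:

  # recreate the profile and pin the kubelet cgroup driver explicitly
  out/minikube-linux-arm64 delete -p force-systemd-flag-310604
  out/minikube-linux-arm64 start -p force-systemd-flag-310604 --memory=3072 --force-systemd \
    --driver=docker --container-runtime=containerd \
    --extra-config=kubelet.cgroup-driver=systemd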
docker_test.go:93: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-flag-310604 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=containerd" : exit status 109
docker_test.go:121: (dbg) Run: out/minikube-linux-arm64 -p force-systemd-flag-310604 ssh "cat /etc/containerd/config.toml"
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2025-12-27 09:19:04.402615979 +0000 UTC m=+3062.674909962
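Per the advice box in the failed start output, a fuller log bundle for a GitHub issue could be captured with, for example:

  # capture the full minikube log bundle for this profile
  out/minikube-linux-arm64 -p force-systemd-flag-310604 logs --file=logs.txt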
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======> post-mortem[TestForceSystemdFlag]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======> post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:240: (dbg) Run: docker inspect force-systemd-flag-310604
helpers_test.go:244: (dbg) docker inspect force-systemd-flag-310604:
-- stdout --
[
{
"Id": "47e3944629b13b566c43d396125a7b7ac492c412ed64d892c2c80ea5984054b7",
"Created": "2025-12-27T09:10:48.033403799Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 205114,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-12-27T09:10:48.111175804Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
"ResolvConfPath": "/var/lib/docker/containers/47e3944629b13b566c43d396125a7b7ac492c412ed64d892c2c80ea5984054b7/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/47e3944629b13b566c43d396125a7b7ac492c412ed64d892c2c80ea5984054b7/hostname",
"HostsPath": "/var/lib/docker/containers/47e3944629b13b566c43d396125a7b7ac492c412ed64d892c2c80ea5984054b7/hosts",
"LogPath": "/var/lib/docker/containers/47e3944629b13b566c43d396125a7b7ac492c412ed64d892c2c80ea5984054b7/47e3944629b13b566c43d396125a7b7ac492c412ed64d892c2c80ea5984054b7-json.log",
"Name": "/force-systemd-flag-310604",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"force-systemd-flag-310604:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "force-systemd-flag-310604",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 3221225472,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 6442450944,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "47e3944629b13b566c43d396125a7b7ac492c412ed64d892c2c80ea5984054b7",
"LowerDir": "/var/lib/docker/overlay2/1832752730d9b80d56e5d1b2e667c033714fa736328e00cf7f25bfaac60db49d-init/diff:/var/lib/docker/overlay2/c2f1250c3b92b032a53152a31400b908e250d3d45594ebbf65fa51d032f3248a/diff",
"MergedDir": "/var/lib/docker/overlay2/1832752730d9b80d56e5d1b2e667c033714fa736328e00cf7f25bfaac60db49d/merged",
"UpperDir": "/var/lib/docker/overlay2/1832752730d9b80d56e5d1b2e667c033714fa736328e00cf7f25bfaac60db49d/diff",
"WorkDir": "/var/lib/docker/overlay2/1832752730d9b80d56e5d1b2e667c033714fa736328e00cf7f25bfaac60db49d/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "force-systemd-flag-310604",
"Source": "/var/lib/docker/volumes/force-systemd-flag-310604/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "force-systemd-flag-310604",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "force-systemd-flag-310604",
"name.minikube.sigs.k8s.io": "force-systemd-flag-310604",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "1e96687a422df76ac3188f2c47b05adc19dd7f8be7690a9adb99feca2abb9143",
"SandboxKey": "/var/run/docker/netns/1e96687a422d",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33043"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33044"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33047"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33045"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33046"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"force-systemd-flag-310604": {
"IPAMConfig": {
"IPv4Address": "192.168.76.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "7e:54:c4:15:68:de",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "775f708a5f85a72bd8cf9cd7fbcfcc4ed9e02d7cba71aadca90a595a328140fc",
"EndpointID": "cf7c3500c9a0740602bf99c4058772b58dc8eefa4147300a2aeaa8438e4cd2e7",
"Gateway": "192.168.76.1",
"IPAddress": "192.168.76.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"force-systemd-flag-310604",
"47e3944629b1"
]
}
}
}
}
]
-- /stdout --
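The same inspect data can be queried selectively with Go templates instead of dumping the whole document; a small sketch using only the standard docker CLI:

  # pull just the fields the post-mortem cares about from the node container
  docker inspect -f '{{.State.Status}} cgroupns={{.HostConfig.CgroupnsMode}}' force-systemd-flag-310604
  docker port force-systemd-flag-310604 8443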
helpers_test.go:248: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-310604 -n force-systemd-flag-310604
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-310604 -n force-systemd-flag-310604: exit status 6 (329.823587ms)
-- stdout --
Running
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1227 09:19:04.743339 230453 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-310604" does not appear in /home/jenkins/minikube-integration/22344-2451/kubeconfig
** /stderr **
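The status warning above means this profile's endpoint never made it into the kubeconfig; once the cluster is actually reachable, the context could be repaired with the command the warning itself names:

  # point kubectl back at this profile
  out/minikube-linux-arm64 -p force-systemd-flag-310604 update-context
  kubectl config current-context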
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestForceSystemdFlag FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======> post-mortem[TestForceSystemdFlag]: minikube logs <======
helpers_test.go:256: (dbg) Run: out/minikube-linux-arm64 -p force-systemd-flag-310604 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p force-systemd-flag-310604 logs -n 25: (1.157040968s)
helpers_test.go:261: TestForceSystemdFlag logs:
-- stdout --
==> Audit <==
┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬───────────
──────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼───────────
──────────┤
│ delete │ -p force-systemd-env-145961 │ force-systemd-env-145961 │ jenkins │ v1.37.0 │ 27 Dec 25 09:13 UTC │ 27 Dec 25 09:13 UTC │
│ start │ -p cert-options-229858 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --container-runtime=containerd │ cert-options-229858 │ jenkins │ v1.37.0 │ 27 Dec 25 09:13 UTC │ 27 Dec 25 09:14 UTC │
│ ssh │ cert-options-229858 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt │ cert-options-229858 │ jenkins │ v1.37.0 │ 27 Dec 25 09:14 UTC │ 27 Dec 25 09:14 UTC │
│ ssh │ -p cert-options-229858 -- sudo cat /etc/kubernetes/admin.conf │ cert-options-229858 │ jenkins │ v1.37.0 │ 27 Dec 25 09:14 UTC │ 27 Dec 25 09:14 UTC │
│ delete │ -p cert-options-229858 │ cert-options-229858 │ jenkins │ v1.37.0 │ 27 Dec 25 09:14 UTC │ 27 Dec 25 09:14 UTC │
│ start │ -p old-k8s-version-046838 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-046838 │ jenkins │ v1.37.0 │ 27 Dec 25 09:14 UTC │ 27 Dec 25 09:15 UTC │
│ addons │ enable metrics-server -p old-k8s-version-046838 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ old-k8s-version-046838 │ jenkins │ v1.37.0 │ 27 Dec 25 09:15 UTC │ 27 Dec 25 09:15 UTC │
│ stop │ -p old-k8s-version-046838 --alsologtostderr -v=3 │ old-k8s-version-046838 │ jenkins │ v1.37.0 │ 27 Dec 25 09:15 UTC │ 27 Dec 25 09:15 UTC │
│ addons │ enable dashboard -p old-k8s-version-046838 --images=MetricsScraper=registry.k8s.io/echoserver:1.4 │ old-k8s-version-046838 │ jenkins │ v1.37.0 │ 27 Dec 25 09:15 UTC │ 27 Dec 25 09:15 UTC │
│ start │ -p old-k8s-version-046838 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-046838 │ jenkins │ v1.37.0 │ 27 Dec 25 09:15 UTC │ 27 Dec 25 09:16 UTC │
│ image │ old-k8s-version-046838 image list --format=json │ old-k8s-version-046838 │ jenkins │ v1.37.0 │ 27 Dec 25 09:16 UTC │ 27 Dec 25 09:16 UTC │
│ pause │ -p old-k8s-version-046838 --alsologtostderr -v=1 │ old-k8s-version-046838 │ jenkins │ v1.37.0 │ 27 Dec 25 09:16 UTC │ 27 Dec 25 09:16 UTC │
│ unpause │ -p old-k8s-version-046838 --alsologtostderr -v=1 │ old-k8s-version-046838 │ jenkins │ v1.37.0 │ 27 Dec 25 09:16 UTC │ 27 Dec 25 09:16 UTC │
│ delete │ -p old-k8s-version-046838 │ old-k8s-version-046838 │ jenkins │ v1.37.0 │ 27 Dec 25 09:16 UTC │ 27 Dec 25 09:16 UTC │
│ delete │ -p old-k8s-version-046838 │ old-k8s-version-046838 │ jenkins │ v1.37.0 │ 27 Dec 25 09:16 UTC │ 27 Dec 25 09:16 UTC │
│ start │ -p no-preload-524171 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0 │ no-preload-524171 │ jenkins │ v1.37.0 │ 27 Dec 25 09:16 UTC │ 27 Dec 25 09:17 UTC │
│ addons │ enable metrics-server -p no-preload-524171 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ no-preload-524171 │ jenkins │ v1.37.0 │ 27 Dec 25 09:17 UTC │ 27 Dec 25 09:17 UTC │
│ stop │ -p no-preload-524171 --alsologtostderr -v=3 │ no-preload-524171 │ jenkins │ v1.37.0 │ 27 Dec 25 09:17 UTC │ 27 Dec 25 09:17 UTC │
│ addons │ enable dashboard -p no-preload-524171 --images=MetricsScraper=registry.k8s.io/echoserver:1.4 │ no-preload-524171 │ jenkins │ v1.37.0 │ 27 Dec 25 09:17 UTC │ 27 Dec 25 09:17 UTC │
│ start │ -p no-preload-524171 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0 │ no-preload-524171 │ jenkins │ v1.37.0 │ 27 Dec 25 09:17 UTC │ 27 Dec 25 09:18 UTC │
│ image │ no-preload-524171 image list --format=json │ no-preload-524171 │ jenkins │ v1.37.0 │ 27 Dec 25 09:18 UTC │ 27 Dec 25 09:18 UTC │
│ pause │ -p no-preload-524171 --alsologtostderr -v=1 │ no-preload-524171 │ jenkins │ v1.37.0 │ 27 Dec 25 09:18 UTC │ 27 Dec 25 09:19 UTC │
│ unpause │ -p no-preload-524171 --alsologtostderr -v=1 │ no-preload-524171 │ jenkins │ v1.37.0 │ 27 Dec 25 09:19 UTC │ 27 Dec 25 09:19 UTC │
│ delete │ -p no-preload-524171 │ no-preload-524171 │ jenkins │ v1.37.0 │ 27 Dec 25 09:19 UTC │ │
│ ssh │ force-systemd-flag-310604 ssh cat /etc/containerd/config.toml │ force-systemd-flag-310604 │ jenkins │ v1.37.0 │ 27 Dec 25 09:19 UTC │ 27 Dec 25 09:19 UTC │
└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴───────────
──────────┘
==> Last Start <==
Log file created at: 2025/12/27 09:17:58
Running on machine: ip-172-31-24-2
Binary: Built with gc go1.25.5 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1227 09:17:58.400903 226201 out.go:360] Setting OutFile to fd 1 ...
I1227 09:17:58.401087 226201 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:17:58.401114 226201 out.go:374] Setting ErrFile to fd 2...
I1227 09:17:58.401134 226201 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:17:58.401431 226201 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-2451/.minikube/bin
I1227 09:17:58.401848 226201 out.go:368] Setting JSON to false
I1227 09:17:58.402717 226201 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3632,"bootTime":1766823447,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
I1227 09:17:58.402841 226201 start.go:143] virtualization:
I1227 09:17:58.407933 226201 out.go:179] * [no-preload-524171] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I1227 09:17:58.411055 226201 notify.go:221] Checking for updates...
I1227 09:17:58.414165 226201 out.go:179] - MINIKUBE_LOCATION=22344
I1227 09:17:58.417195 226201 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1227 09:17:58.420122 226201 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22344-2451/kubeconfig
I1227 09:17:58.423131 226201 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-2451/.minikube
I1227 09:17:58.426044 226201 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I1227 09:17:58.429148 226201 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1227 09:17:58.432664 226201 config.go:182] Loaded profile config "no-preload-524171": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1227 09:17:58.433273 226201 driver.go:422] Setting default libvirt URI to qemu:///system
I1227 09:17:58.459759 226201 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
I1227 09:17:58.459879 226201 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1227 09:17:58.523524 226201 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:43 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 09:17:58.513905546 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1227 09:17:58.523637 226201 docker.go:319] overlay module found
I1227 09:17:58.526829 226201 out.go:179] * Using the docker driver based on existing profile
I1227 09:17:58.529610 226201 start.go:309] selected driver: docker
I1227 09:17:58.529636 226201 start.go:928] validating driver "docker" against &{Name:no-preload-524171 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-524171 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Moun
t9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1227 09:17:58.529746 226201 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1227 09:17:58.530463 226201 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1227 09:17:58.585148 226201 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:43 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 09:17:58.575093532 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1227 09:17:58.585477 226201 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1227 09:17:58.585512 226201 cni.go:84] Creating CNI manager for ""
I1227 09:17:58.585568 226201 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1227 09:17:58.585612 226201 start.go:353] cluster config:
{Name:no-preload-524171 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-524171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1227 09:17:58.590571 226201 out.go:179] * Starting "no-preload-524171" primary control-plane node in "no-preload-524171" cluster
I1227 09:17:58.593397 226201 cache.go:134] Beginning downloading kic base image for docker with containerd
I1227 09:17:58.596516 226201 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
I1227 09:17:58.599394 226201 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I1227 09:17:58.599475 226201 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
I1227 09:17:58.599543 226201 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/no-preload-524171/config.json ...
I1227 09:17:58.599837 226201 cache.go:107] acquiring lock: {Name:mke1e922d7eb2a2676149298b5dba45833ae8879 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1227 09:17:58.599917 226201 cache.go:115] /home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
I1227 09:17:58.599932 226201 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 103.033µs
I1227 09:17:58.599952 226201 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
I1227 09:17:58.600044 226201 cache.go:107] acquiring lock: {Name:mk6507c82ba2441dda683a90107aed49c8f037b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1227 09:17:58.600091 226201 cache.go:115] /home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 exists
I1227 09:17:58.600102 226201 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0" -> "/home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0" took 139.128µs
I1227 09:17:58.600109 226201 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0 -> /home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 succeeded
I1227 09:17:58.600125 226201 cache.go:107] acquiring lock: {Name:mk8855e1f661e2dc77ec51f38d05c8826759bdc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1227 09:17:58.600159 226201 cache.go:115] /home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 exists
I1227 09:17:58.600169 226201 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0" -> "/home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0" took 44.514µs
I1227 09:17:58.600175 226201 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0 -> /home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 succeeded
I1227 09:17:58.600184 226201 cache.go:107] acquiring lock: {Name:mk573ecca5f6c5e3847e355240d192409babe6a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1227 09:17:58.600221 226201 cache.go:115] /home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 exists
I1227 09:17:58.600230 226201 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0" -> "/home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0" took 47.073µs
I1227 09:17:58.600238 226201 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0 -> /home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 succeeded
I1227 09:17:58.600247 226201 cache.go:107] acquiring lock: {Name:mka17529d9dfa557ae96a2eab8e7ada7a86a0715 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1227 09:17:58.600275 226201 cache.go:115] /home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 exists
I1227 09:17:58.600284 226201 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0" -> "/home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0" took 37.383µs
I1227 09:17:58.600290 226201 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0 -> /home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 succeeded
I1227 09:17:58.600309 226201 cache.go:107] acquiring lock: {Name:mkb3bf38af1b254286e4b9cb77de8e4fb8511831 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1227 09:17:58.600340 226201 cache.go:115] /home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
I1227 09:17:58.600349 226201 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 49.995µs
I1227 09:17:58.600355 226201 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
I1227 09:17:58.600364 226201 cache.go:107] acquiring lock: {Name:mk2b2169d020c5fd5946a8dee42079f4cde09f1e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1227 09:17:58.600393 226201 cache.go:115] /home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
I1227 09:17:58.600405 226201 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 39.869µs
I1227 09:17:58.600411 226201 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
I1227 09:17:58.600425 226201 cache.go:107] acquiring lock: {Name:mked1e3f89cbb58c53698baeb61b65b0654307c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1227 09:17:58.600456 226201 cache.go:115] /home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
I1227 09:17:58.600464 226201 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 40.526µs
I1227 09:17:58.600470 226201 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
I1227 09:17:58.600476 226201 cache.go:87] Successfully saved all images to host disk.
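The cache hits above follow a simple naming convention: each image reference maps to a tar file under .minikube/cache/images/<arch>/, with the tag separator ":" replaced by "_", and the save step is skipped when that file already exists. A minimal Go sketch of the same check (hypothetical, not minikube's actual code), assuming ~/.minikube/cache as the cache root:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// cachePath maps an image reference to the on-disk tar path, replacing the
// tag separator ":" with "_" as in the paths logged above.
func cachePath(cacheDir, arch, image string) string {
	return filepath.Join(cacheDir, "images", arch, strings.ReplaceAll(image, ":", "_"))
}

func main() {
	home, _ := os.UserHomeDir()
	p := cachePath(filepath.Join(home, ".minikube", "cache"), "arm64", "registry.k8s.io/kube-proxy:v1.35.0")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("cache hit, skip save:", p)
	} else {
		fmt.Println("cache miss, would pull and save:", p)
	}
}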
I1227 09:17:58.621079 226201 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
I1227 09:17:58.621100 226201 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
I1227 09:17:58.621115 226201 cache.go:243] Successfully downloaded all kic artifacts
I1227 09:17:58.621144 226201 start.go:360] acquireMachinesLock for no-preload-524171: {Name:mkf5fad8426c1227ad56bd7da91d15024fcf5f71 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1227 09:17:58.621208 226201 start.go:364] duration metric: took 44.604µs to acquireMachinesLock for "no-preload-524171"
I1227 09:17:58.621231 226201 start.go:96] Skipping create...Using existing machine configuration
I1227 09:17:58.621241 226201 fix.go:54] fixHost starting:
I1227 09:17:58.621516 226201 cli_runner.go:164] Run: docker container inspect no-preload-524171 --format={{.State.Status}}
I1227 09:17:58.639396 226201 fix.go:112] recreateIfNeeded on no-preload-524171: state=Stopped err=<nil>
W1227 09:17:58.639440 226201 fix.go:138] unexpected machine state, will restart: <nil>
I1227 09:17:58.644664 226201 out.go:252] * Restarting existing docker container for "no-preload-524171" ...
I1227 09:17:58.644795 226201 cli_runner.go:164] Run: docker start no-preload-524171
I1227 09:17:58.922378 226201 cli_runner.go:164] Run: docker container inspect no-preload-524171 --format={{.State.Status}}
I1227 09:17:58.944063 226201 kic.go:430] container "no-preload-524171" state is running.
I1227 09:17:58.944458 226201 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-524171
I1227 09:17:58.966641 226201 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/no-preload-524171/config.json ...
I1227 09:17:58.966876 226201 machine.go:94] provisionDockerMachine start ...
I1227 09:17:58.966935 226201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-524171
I1227 09:17:58.994098 226201 main.go:144] libmachine: Using SSH client type: native
I1227 09:17:58.994484 226201 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33068 <nil> <nil>}
I1227 09:17:58.994502 226201 main.go:144] libmachine: About to run SSH command:
hostname
I1227 09:17:58.996230 226201 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I1227 09:18:02.139960 226201 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-524171
I1227 09:18:02.140068 226201 ubuntu.go:182] provisioning hostname "no-preload-524171"
I1227 09:18:02.140159 226201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-524171
I1227 09:18:02.158238 226201 main.go:144] libmachine: Using SSH client type: native
I1227 09:18:02.158569 226201 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33068 <nil> <nil>}
I1227 09:18:02.158581 226201 main.go:144] libmachine: About to run SSH command:
sudo hostname no-preload-524171 && echo "no-preload-524171" | sudo tee /etc/hostname
I1227 09:18:02.305841 226201 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-524171
I1227 09:18:02.305952 226201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-524171
I1227 09:18:02.324154 226201 main.go:144] libmachine: Using SSH client type: native
I1227 09:18:02.324473 226201 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33068 <nil> <nil>}
I1227 09:18:02.324494 226201 main.go:144] libmachine: About to run SSH command:
if ! grep -xq '.*\sno-preload-524171' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-524171/g' /etc/hosts;
else
echo '127.0.1.1 no-preload-524171' | sudo tee -a /etc/hosts;
fi
fi
I1227 09:18:02.464447 226201 main.go:144] libmachine: SSH cmd err, output: <nil>:
I1227 09:18:02.464549 226201 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22344-2451/.minikube CaCertPath:/home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22344-2451/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22344-2451/.minikube}
I1227 09:18:02.464593 226201 ubuntu.go:190] setting up certificates
I1227 09:18:02.464619 226201 provision.go:84] configureAuth start
I1227 09:18:02.464701 226201 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-524171
I1227 09:18:02.486454 226201 provision.go:143] copyHostCerts
I1227 09:18:02.486533 226201 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-2451/.minikube/ca.pem, removing ...
I1227 09:18:02.486554 226201 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-2451/.minikube/ca.pem
I1227 09:18:02.486634 226201 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22344-2451/.minikube/ca.pem (1078 bytes)
I1227 09:18:02.486740 226201 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-2451/.minikube/cert.pem, removing ...
I1227 09:18:02.486751 226201 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-2451/.minikube/cert.pem
I1227 09:18:02.486779 226201 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22344-2451/.minikube/cert.pem (1123 bytes)
I1227 09:18:02.486837 226201 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-2451/.minikube/key.pem, removing ...
I1227 09:18:02.486846 226201 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-2451/.minikube/key.pem
I1227 09:18:02.486871 226201 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22344-2451/.minikube/key.pem (1679 bytes)
I1227 09:18:02.486930 226201 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22344-2451/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca-key.pem org=jenkins.no-preload-524171 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-524171]
I1227 09:18:02.623993 226201 provision.go:177] copyRemoteCerts
I1227 09:18:02.624053 226201 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1227 09:18:02.624097 226201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-524171
I1227 09:18:02.641312 226201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/no-preload-524171/id_rsa Username:docker}
I1227 09:18:02.740716 226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1227 09:18:02.759149 226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I1227 09:18:02.778722 226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1227 09:18:02.796675 226201 provision.go:87] duration metric: took 332.020144ms to configureAuth
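configureAuth above copies the host CA material and then generates a server certificate whose SANs match the list in the log (127.0.0.1, 192.168.85.2, localhost, minikube, no-preload-524171). A rough Go sketch of that step, assuming an RSA PKCS#1 CA key and illustrative file names; this is not minikube's implementation:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

// writePEM writes a single PEM block (certificate or key) to path.
func writePEM(path, blockType string, der []byte) {
	f, err := os.Create(path)
	check(err)
	defer f.Close()
	check(pem.Encode(f, &pem.Block{Type: blockType, Bytes: der}))
}

func main() {
	// Load the existing CA cert and key (paths are illustrative).
	caCertPEM, err := os.ReadFile("ca.pem")
	check(err)
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	check(err)
	caBlock, _ := pem.Decode(caCertPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	check(err)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 key
	check(err)

	// New server key plus a CA-signed certificate carrying the SANs above.
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-524171"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "no-preload-524171"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	check(err)
	writePEM("server.pem", "CERTIFICATE", der)
	writePEM("server-key.pem", "RSA PRIVATE KEY", x509.MarshalPKCS1PrivateKey(serverKey))
}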
I1227 09:18:02.796700 226201 ubuntu.go:206] setting minikube options for container-runtime
I1227 09:18:02.796895 226201 config.go:182] Loaded profile config "no-preload-524171": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1227 09:18:02.796902 226201 machine.go:97] duration metric: took 3.830019014s to provisionDockerMachine
I1227 09:18:02.796910 226201 start.go:293] postStartSetup for "no-preload-524171" (driver="docker")
I1227 09:18:02.796919 226201 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1227 09:18:02.796962 226201 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1227 09:18:02.797001 226201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-524171
I1227 09:18:02.815376 226201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/no-preload-524171/id_rsa Username:docker}
I1227 09:18:02.915954 226201 ssh_runner.go:195] Run: cat /etc/os-release
I1227 09:18:02.919376 226201 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1227 09:18:02.919403 226201 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1227 09:18:02.919415 226201 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-2451/.minikube/addons for local assets ...
I1227 09:18:02.919470 226201 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-2451/.minikube/files for local assets ...
I1227 09:18:02.919550 226201 filesync.go:149] local asset: /home/jenkins/minikube-integration/22344-2451/.minikube/files/etc/ssl/certs/42882.pem -> 42882.pem in /etc/ssl/certs
I1227 09:18:02.919661 226201 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1227 09:18:02.927306 226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/files/etc/ssl/certs/42882.pem --> /etc/ssl/certs/42882.pem (1708 bytes)
I1227 09:18:02.944850 226201 start.go:296] duration metric: took 147.9256ms for postStartSetup
I1227 09:18:02.944984 226201 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1227 09:18:02.945030 226201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-524171
I1227 09:18:02.962355 226201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/no-preload-524171/id_rsa Username:docker}
I1227 09:18:03.061913 226201 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1227 09:18:03.066978 226201 fix.go:56] duration metric: took 4.445730661s for fixHost
I1227 09:18:03.067020 226201 start.go:83] releasing machines lock for "no-preload-524171", held for 4.445785628s
I1227 09:18:03.067101 226201 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-524171
I1227 09:18:03.085169 226201 ssh_runner.go:195] Run: cat /version.json
I1227 09:18:03.085206 226201 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1227 09:18:03.085229 226201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-524171
I1227 09:18:03.085263 226201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-524171
I1227 09:18:03.111386 226201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/no-preload-524171/id_rsa Username:docker}
I1227 09:18:03.112725 226201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/no-preload-524171/id_rsa Username:docker}
I1227 09:18:03.215862 226201 ssh_runner.go:195] Run: systemctl --version
I1227 09:18:03.312166 226201 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1227 09:18:03.316853 226201 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1227 09:18:03.316955 226201 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1227 09:18:03.325177 226201 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I1227 09:18:03.325245 226201 start.go:496] detecting cgroup driver to use...
I1227 09:18:03.325303 226201 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1227 09:18:03.325367 226201 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1227 09:18:03.343327 226201 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1227 09:18:03.357022 226201 docker.go:218] disabling cri-docker service (if available) ...
I1227 09:18:03.357103 226201 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1227 09:18:03.373855 226201 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1227 09:18:03.387118 226201 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1227 09:18:03.527687 226201 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1227 09:18:03.635245 226201 docker.go:234] disabling docker service ...
I1227 09:18:03.635357 226201 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1227 09:18:03.650672 226201 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1227 09:18:03.664005 226201 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1227 09:18:03.777913 226201 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1227 09:18:03.889853 226201 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1227 09:18:03.902485 226201 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1227 09:18:03.916709 226201 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1227 09:18:03.925433 226201 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1227 09:18:03.934123 226201 containerd.go:147] configuring containerd to use "cgroupfs" as cgroup driver...
I1227 09:18:03.934217 226201 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1227 09:18:03.942911 226201 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1227 09:18:03.951558 226201 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1227 09:18:03.960253 226201 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1227 09:18:03.968841 226201 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1227 09:18:03.976733 226201 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1227 09:18:03.985289 226201 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1227 09:18:03.993999 226201 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1227 09:18:04.003737 226201 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1227 09:18:04.012398 226201 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1227 09:18:04.021558 226201 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 09:18:04.129037 226201 ssh_runner.go:195] Run: sudo systemctl restart containerd
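The sed invocations above rewrite /etc/containerd/config.toml so containerd matches the "cgroupfs" driver detected on the host; the key change is forcing SystemdCgroup = false before the daemon-reload and containerd restart. A small Go sketch of that single edit (illustrative only, not minikube's code):

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	path := "/etc/containerd/config.toml" // path taken from the log
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, "read:", err)
		os.Exit(1)
	}
	// Match any existing SystemdCgroup line, preserving its indentation.
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0644); err != nil {
		fmt.Fprintln(os.Stderr, "write:", err)
		os.Exit(1)
	}
	fmt.Println("set SystemdCgroup = false; restart containerd to apply")
}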
I1227 09:18:04.301734 226201 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
I1227 09:18:04.301805 226201 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1227 09:18:04.305816 226201 start.go:574] Will wait 60s for crictl version
I1227 09:18:04.305937 226201 ssh_runner.go:195] Run: which crictl
I1227 09:18:04.309675 226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1227 09:18:04.333946 226201 start.go:590] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v2.2.1
RuntimeApiVersion: v1
I1227 09:18:04.334025 226201 ssh_runner.go:195] Run: containerd --version
I1227 09:18:04.353784 226201 ssh_runner.go:195] Run: containerd --version
I1227 09:18:04.376780 226201 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
I1227 09:18:04.379820 226201 cli_runner.go:164] Run: docker network inspect no-preload-524171 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1227 09:18:04.395618 226201 ssh_runner.go:195] Run: grep 192.168.85.1 host.minikube.internal$ /etc/hosts
I1227 09:18:04.399460 226201 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1227 09:18:04.409376 226201 kubeadm.go:884] updating cluster {Name:no-preload-524171 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-524171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
I1227 09:18:04.409501 226201 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I1227 09:18:04.409563 226201 ssh_runner.go:195] Run: sudo crictl images --output json
I1227 09:18:04.438846 226201 containerd.go:635] all images are preloaded for containerd runtime.
I1227 09:18:04.438869 226201 cache_images.go:86] Images are preloaded, skipping loading
I1227 09:18:04.438877 226201 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 containerd true true} ...
I1227 09:18:04.438966 226201 kubeadm.go:947] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-524171 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
[Install]
config:
{KubernetesVersion:v1.35.0 ClusterName:no-preload-524171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1227 09:18:04.439031 226201 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
I1227 09:18:04.465516 226201 cni.go:84] Creating CNI manager for ""
I1227 09:18:04.465585 226201 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1227 09:18:04.465635 226201 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1227 09:18:04.465699 226201 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-524171 NodeName:no-preload-524171 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1227 09:18:04.465865 226201 kubeadm.go:203] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.85.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
name: "no-preload-524171"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.85.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.35.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
failCgroupV1: false
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I1227 09:18:04.465979 226201 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
I1227 09:18:04.473830 226201 binaries.go:51] Found k8s binaries, skipping transfer
I1227 09:18:04.473907 226201 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1227 09:18:04.481422 226201 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
I1227 09:18:04.493700 226201 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1227 09:18:04.506726 226201 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2250 bytes)
I1227 09:18:04.519456 226201 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I1227 09:18:04.523030 226201 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1227 09:18:04.532996 226201 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 09:18:04.638155 226201 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1227 09:18:04.655671 226201 certs.go:69] Setting up /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/no-preload-524171 for IP: 192.168.85.2
I1227 09:18:04.655692 226201 certs.go:195] generating shared ca certs ...
I1227 09:18:04.655708 226201 certs.go:227] acquiring lock for ca certs: {Name:mk774ac921aa16ecd5f2d791fd87948cd01f1dbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 09:18:04.655867 226201 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22344-2451/.minikube/ca.key
I1227 09:18:04.655908 226201 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22344-2451/.minikube/proxy-client-ca.key
I1227 09:18:04.655915 226201 certs.go:257] generating profile certs ...
I1227 09:18:04.656032 226201 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/no-preload-524171/client.key
I1227 09:18:04.656084 226201 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/no-preload-524171/apiserver.key.580fb977
I1227 09:18:04.656125 226201 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/no-preload-524171/proxy-client.key
I1227 09:18:04.656234 226201 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/4288.pem (1338 bytes)
W1227 09:18:04.656264 226201 certs.go:480] ignoring /home/jenkins/minikube-integration/22344-2451/.minikube/certs/4288_empty.pem, impossibly tiny 0 bytes
I1227 09:18:04.656271 226201 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca-key.pem (1675 bytes)
I1227 09:18:04.656303 226201 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca.pem (1078 bytes)
I1227 09:18:04.656325 226201 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/cert.pem (1123 bytes)
I1227 09:18:04.656352 226201 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/key.pem (1679 bytes)
I1227 09:18:04.656393 226201 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-2451/.minikube/files/etc/ssl/certs/42882.pem (1708 bytes)
I1227 09:18:04.657021 226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1227 09:18:04.677523 226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
I1227 09:18:04.694756 226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1227 09:18:04.712555 226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1227 09:18:04.729518 226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/no-preload-524171/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I1227 09:18:04.749526 226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/no-preload-524171/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1227 09:18:04.767207 226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/no-preload-524171/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1227 09:18:04.792023 226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/no-preload-524171/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1227 09:18:04.812544 226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1227 09:18:04.834434 226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/certs/4288.pem --> /usr/share/ca-certificates/4288.pem (1338 bytes)
I1227 09:18:04.861056 226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/files/etc/ssl/certs/42882.pem --> /usr/share/ca-certificates/42882.pem (1708 bytes)
I1227 09:18:04.880555 226201 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
I1227 09:18:04.909584 226201 ssh_runner.go:195] Run: openssl version
I1227 09:18:04.916024 226201 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1227 09:18:04.923337 226201 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1227 09:18:04.934637 226201 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1227 09:18:04.940662 226201 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 08:28 /usr/share/ca-certificates/minikubeCA.pem
I1227 09:18:04.940766 226201 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1227 09:18:04.985337 226201 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1227 09:18:04.992974 226201 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4288.pem
I1227 09:18:05.002332 226201 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4288.pem /etc/ssl/certs/4288.pem
I1227 09:18:05.012484 226201 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4288.pem
I1227 09:18:05.018305 226201 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 08:34 /usr/share/ca-certificates/4288.pem
I1227 09:18:05.018428 226201 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4288.pem
I1227 09:18:05.061382 226201 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I1227 09:18:05.068981 226201 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42882.pem
I1227 09:18:05.076512 226201 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42882.pem /etc/ssl/certs/42882.pem
I1227 09:18:05.084139 226201 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42882.pem
I1227 09:18:05.088106 226201 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 08:34 /usr/share/ca-certificates/42882.pem
I1227 09:18:05.088170 226201 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42882.pem
I1227 09:18:05.135384 226201 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
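The openssl -hash / test -L pairs above implement the standard OpenSSL trust-store layout: /etc/ssl/certs/<subject-hash>.0 must be a symlink to the PEM file so TLS clients on the node can find the CA. A hypothetical Go sketch of the same wiring, shelling out to openssl for the hash (paths taken from the log, run as root):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	// `openssl x509 -hash -noout` prints the subject hash, e.g. b5213941.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "openssl:", err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	// os.Symlink fails if the link already exists; remove first to mimic `ln -fs`.
	_ = os.Remove(link)
	if err := os.Symlink(pemPath, link); err != nil {
		fmt.Fprintln(os.Stderr, "symlink:", err)
		os.Exit(1)
	}
	fmt.Println("trusted", pemPath, "via", link)
}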
I1227 09:18:05.143220 226201 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1227 09:18:05.147353 226201 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I1227 09:18:05.189042 226201 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I1227 09:18:05.232533 226201 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I1227 09:18:05.275982 226201 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I1227 09:18:05.318404 226201 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I1227 09:18:05.366509 226201 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
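The -checkend 86400 calls above ask openssl whether each certificate will still be valid 24 hours from now; a non-zero exit is what triggers regeneration. The same check expressed in plain Go (a sketch, using one of the paths from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		fmt.Fprintln(os.Stderr, "not a PEM certificate")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Mirrors `openssl x509 -checkend 86400`: fail if expiry is within 24h.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h; would be regenerated")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least another 24h")
}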
I1227 09:18:05.435816 226201 kubeadm.go:401] StartCluster: {Name:no-preload-524171 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-524171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1227 09:18:05.435909 226201 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1227 09:18:05.436000 226201 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1227 09:18:05.485146 226201 cri.go:96] found id: "2ae8d3c66d1207c0eebd2f380dead182121d0e824ac82c8c0009b723c8c4282c"
I1227 09:18:05.485165 226201 cri.go:96] found id: "5aa63df08b73e414d87e2974739bc0f7be6a4215d8262e879f3bc63a59ccce8a"
I1227 09:18:05.485169 226201 cri.go:96] found id: "f124ffe1987aac2609cef749803af5ddf75469757a908e6075b30c6d3170943b"
I1227 09:18:05.485173 226201 cri.go:96] found id: "38065bf9d52701ed4b1494dcd439b8948889c7df7e86e543b37681459e2dbf0c"
I1227 09:18:05.485179 226201 cri.go:96] found id: "3a11c7ef8307a5efe705e860ee3d142f3aad4834ee624e52c2fe7b6d4da29f36"
I1227 09:18:05.485182 226201 cri.go:96] found id: "4738b282a7ad923daf8903fb7015e488c518676001de17bf9e718e7cafe628da"
I1227 09:18:05.485186 226201 cri.go:96] found id: "fc42941264da4d0e2ee7d00a5a1374b1b12d5b77f41d0e50586fc4c6481e6df6"
I1227 09:18:05.485189 226201 cri.go:96] found id: "751aa8ea3c05e00083a87550345bfebe9f06f30ec6aa59634b1b0f573ef9653f"
I1227 09:18:05.485192 226201 cri.go:96] found id: ""
I1227 09:18:05.485249 226201 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
I1227 09:18:05.509416 226201 cri.go:123] JSON = [{"ociVersion":"1.2.1","id":"0c1c6df4c1bf3cc8a81e4004f9ef7217c54ba9778fb66bdc5c76d81150a25779","pid":864,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0c1c6df4c1bf3cc8a81e4004f9ef7217c54ba9778fb66bdc5c76d81150a25779","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0c1c6df4c1bf3cc8a81e4004f9ef7217c54ba9778fb66bdc5c76d81150a25779/rootfs","created":"2025-12-27T09:18:05.432228982Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"0c1c6df4c1bf3cc8a81e4004f9ef7217c54ba9778fb66bdc5c76d81150a25779","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-no-preload-524171_62054dbbefd9c8741e5c32bf10947cc5","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-no-preload-524171","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"62054dbbefd9c8741e5c32bf10947cc5"},"owner":"root"},{"ociVersion":"1.2.1","id":"16411915039ff46130f92e0dd369bc617ebf156d18c8e18cc5151586e765f659","pid":903,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/16411915039ff46130f92e0dd369bc617ebf156d18c8e18cc5151586e765f659","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/16411915039ff46130f92e0dd369bc617ebf156d18c8e18cc5151586e765f659/rootfs","created":"0001-01-01T00:00:00Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"16411915039ff46130f92e0dd369bc617ebf156d18c8e18cc5151586e765f659","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-no-preload-524171_e0876fb181906b5451fe5348bc79cc69","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-no-preload-524171","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"e0876fb181906b5451fe5348bc79cc69"},"owner":"root"}]
I1227 09:18:05.509511 226201 cri.go:133] list returned 2 containers
I1227 09:18:05.509527 226201 cri.go:136] container: {ID:0c1c6df4c1bf3cc8a81e4004f9ef7217c54ba9778fb66bdc5c76d81150a25779 Status:running}
I1227 09:18:05.509549 226201 cri.go:138] skipping 0c1c6df4c1bf3cc8a81e4004f9ef7217c54ba9778fb66bdc5c76d81150a25779 - not in ps
I1227 09:18:05.509554 226201 cri.go:136] container: {ID:16411915039ff46130f92e0dd369bc617ebf156d18c8e18cc5151586e765f659 Status:created}
I1227 09:18:05.509559 226201 cri.go:138] skipping 16411915039ff46130f92e0dd369bc617ebf156d18c8e18cc5151586e765f659 - not in ps
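The listing above reconciles two views of the node: container IDs reported by crictl and tasks reported by `runc list -f json`, skipping runc entries that crictl did not return ("not in ps"). A trimmed-down Go sketch of that filtering, with shortened, illustrative IDs and JSON:

package main

import (
	"encoding/json"
	"fmt"
)

// runcContainer keeps just the fields this filtering step needs from
// `runc list -f json`.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func main() {
	// Trimmed stand-in for the JSON logged above (IDs shortened).
	data := []byte(`[{"id":"0c1c6df4","status":"running"},{"id":"16411915","status":"created"}]`)
	// IDs that the earlier `crictl ps -a` call reported (illustrative).
	known := map[string]bool{"2ae8d3c6": true, "5aa63df0": true}

	var list []runcContainer
	if err := json.Unmarshal(data, &list); err != nil {
		panic(err)
	}
	for _, c := range list {
		if !known[c.ID] {
			fmt.Printf("skipping %s - not in ps\n", c.ID)
			continue
		}
		fmt.Printf("would operate on %s (status %s)\n", c.ID, c.Status)
	}
}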
I1227 09:18:05.509609 226201 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1227 09:18:05.527100 226201 kubeadm.go:417] found existing configuration files, will attempt cluster restart
I1227 09:18:05.527126 226201 kubeadm.go:598] restartPrimaryControlPlane start ...
I1227 09:18:05.527211 226201 ssh_runner.go:195] Run: sudo test -d /data/minikube
I1227 09:18:05.543807 226201 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I1227 09:18:05.544234 226201 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-524171" does not appear in /home/jenkins/minikube-integration/22344-2451/kubeconfig
I1227 09:18:05.544333 226201 kubeconfig.go:62] /home/jenkins/minikube-integration/22344-2451/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-524171" cluster setting kubeconfig missing "no-preload-524171" context setting]
I1227 09:18:05.544597 226201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-2451/kubeconfig: {Name:mke3c6b6762542ff27940478b7eeb947283979c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 09:18:05.545813 226201 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I1227 09:18:05.572130 226201 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
I1227 09:18:05.572165 226201 kubeadm.go:602] duration metric: took 45.033003ms to restartPrimaryControlPlane
I1227 09:18:05.572175 226201 kubeadm.go:403] duration metric: took 136.372256ms to StartCluster
I1227 09:18:05.572190 226201 settings.go:142] acquiring lock: {Name:mk6f44443555e6cff1da53c787c3ea2c729d418d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 09:18:05.572285 226201 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/22344-2451/kubeconfig
I1227 09:18:05.572894 226201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-2451/kubeconfig: {Name:mke3c6b6762542ff27940478b7eeb947283979c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 09:18:05.573092 226201 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1227 09:18:05.573450 226201 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I1227 09:18:05.573519 226201 addons.go:70] Setting storage-provisioner=true in profile "no-preload-524171"
I1227 09:18:05.573531 226201 addons.go:239] Setting addon storage-provisioner=true in "no-preload-524171"
W1227 09:18:05.573536 226201 addons.go:248] addon storage-provisioner should already be in state true
I1227 09:18:05.573556 226201 host.go:66] Checking if "no-preload-524171" exists ...
I1227 09:18:05.574033 226201 cli_runner.go:164] Run: docker container inspect no-preload-524171 --format={{.State.Status}}
I1227 09:18:05.574653 226201 config.go:182] Loaded profile config "no-preload-524171": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1227 09:18:05.574745 226201 addons.go:70] Setting default-storageclass=true in profile "no-preload-524171"
I1227 09:18:05.574786 226201 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-524171"
I1227 09:18:05.575066 226201 cli_runner.go:164] Run: docker container inspect no-preload-524171 --format={{.State.Status}}
I1227 09:18:05.575561 226201 addons.go:70] Setting dashboard=true in profile "no-preload-524171"
I1227 09:18:05.575580 226201 addons.go:239] Setting addon dashboard=true in "no-preload-524171"
W1227 09:18:05.575587 226201 addons.go:248] addon dashboard should already be in state true
I1227 09:18:05.575609 226201 host.go:66] Checking if "no-preload-524171" exists ...
I1227 09:18:05.576247 226201 cli_runner.go:164] Run: docker container inspect no-preload-524171 --format={{.State.Status}}
I1227 09:18:05.577920 226201 addons.go:70] Setting metrics-server=true in profile "no-preload-524171"
I1227 09:18:05.577944 226201 addons.go:239] Setting addon metrics-server=true in "no-preload-524171"
W1227 09:18:05.577952 226201 addons.go:248] addon metrics-server should already be in state true
I1227 09:18:05.578071 226201 host.go:66] Checking if "no-preload-524171" exists ...
I1227 09:18:05.578193 226201 out.go:179] * Verifying Kubernetes components...
I1227 09:18:05.579893 226201 cli_runner.go:164] Run: docker container inspect no-preload-524171 --format={{.State.Status}}
I1227 09:18:05.590828 226201 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 09:18:05.638488 226201 addons.go:239] Setting addon default-storageclass=true in "no-preload-524171"
W1227 09:18:05.638517 226201 addons.go:248] addon default-storageclass should already be in state true
I1227 09:18:05.638556 226201 host.go:66] Checking if "no-preload-524171" exists ...
I1227 09:18:05.640452 226201 cli_runner.go:164] Run: docker container inspect no-preload-524171 --format={{.State.Status}}
I1227 09:18:05.647608 226201 out.go:179] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1227 09:18:05.650099 226201 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1227 09:18:05.653138 226201 out.go:179] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I1227 09:18:05.653260 226201 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1227 09:18:05.653271 226201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1227 09:18:05.653331 226201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-524171
I1227 09:18:05.659205 226201 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I1227 09:18:05.659231 226201 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I1227 09:18:05.659314 226201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-524171
I1227 09:18:05.659401 226201 out.go:179] - Using image registry.k8s.io/echoserver:1.4
I1227 09:18:05.670265 226201 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1227 09:18:05.670294 226201 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1227 09:18:05.670400 226201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-524171
I1227 09:18:05.696289 226201 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
I1227 09:18:05.696310 226201 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1227 09:18:05.696374 226201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-524171
I1227 09:18:05.715258 226201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/no-preload-524171/id_rsa Username:docker}
I1227 09:18:05.742406 226201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/no-preload-524171/id_rsa Username:docker}
I1227 09:18:05.752077 226201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/no-preload-524171/id_rsa Username:docker}
I1227 09:18:05.752578 226201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/no-preload-524171/id_rsa Username:docker}
I1227 09:18:05.891456 226201 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1227 09:18:05.955950 226201 node_ready.go:35] waiting up to 6m0s for node "no-preload-524171" to be "Ready" ...
I1227 09:18:06.005569 226201 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I1227 09:18:06.005646 226201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I1227 09:18:06.092472 226201 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I1227 09:18:06.092567 226201 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I1227 09:18:06.141249 226201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1227 09:18:06.168253 226201 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1227 09:18:06.168330 226201 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1227 09:18:06.182987 226201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1227 09:18:06.197144 226201 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
I1227 09:18:06.197169 226201 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I1227 09:18:06.354968 226201 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1227 09:18:06.354993 226201 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1227 09:18:06.373252 226201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W1227 09:18:06.435777 226201 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I1227 09:18:06.435880 226201 retry.go:84] will retry after 300ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
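The apply failure above is expected while the restarted apiserver is still coming up: the connection to localhost:8443 is refused, so the manifest is retried after a short delay (and, as seen below, later reapplied with --force). A minimal Go sketch of that retry pattern; the helper name and attempt count are illustrative, not minikube's API:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry re-runs `kubectl apply` a few times, sleeping between
// attempts, because early attempts can fail while the apiserver starts.
func applyWithRetry(manifest string, attempts int, delay time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", "apply", "-f", manifest).CombinedOutput()
		if err == nil {
			fmt.Printf("applied %s\n%s", manifest, out)
			return nil
		}
		lastErr = fmt.Errorf("apply %s: %v: %s", manifest, err, out)
		time.Sleep(delay)
	}
	return lastErr
}

func main() {
	err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 3, 300*time.Millisecond)
	if err != nil {
		fmt.Println("giving up:", err)
	}
}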
I1227 09:18:06.488362 226201 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1227 09:18:06.488438 226201 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1227 09:18:06.646252 226201 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1227 09:18:06.646325 226201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I1227 09:18:06.704346 226201 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1227 09:18:06.704421 226201 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1227 09:18:06.748111 226201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I1227 09:18:06.752904 226201 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1227 09:18:06.752965 226201 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1227 09:18:06.825777 226201 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1227 09:18:06.825853 226201 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1227 09:18:06.917894 226201 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1227 09:18:06.917967 226201 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1227 09:18:06.976469 226201 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1227 09:18:06.976541 226201 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1227 09:18:07.014820 226201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1227 09:18:09.246739 226201 node_ready.go:49] node "no-preload-524171" is "Ready"
I1227 09:18:09.246767 226201 node_ready.go:38] duration metric: took 3.290765949s for node "no-preload-524171" to be "Ready" ...
I1227 09:18:09.246781 226201 api_server.go:52] waiting for apiserver process to appear ...
I1227 09:18:09.246843 226201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1227 09:18:09.433975 226201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.250900421s)
I1227 09:18:11.836267 226201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.462926205s)
I1227 09:18:11.836298 226201 addons.go:495] Verifying addon metrics-server=true in "no-preload-524171"
I1227 09:18:11.906063 226201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.157854122s)
I1227 09:18:11.906186 226201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.891291045s)
I1227 09:18:11.906360 226201 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.659505443s)
I1227 09:18:11.906375 226201 api_server.go:72] duration metric: took 6.333252631s to wait for apiserver process to appear ...
I1227 09:18:11.906381 226201 api_server.go:88] waiting for apiserver healthz status ...
I1227 09:18:11.906398 226201 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1227 09:18:11.909821 226201 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p no-preload-524171 addons enable metrics-server
I1227 09:18:11.912808 226201 out.go:179] * Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
I1227 09:18:11.914800 226201 api_server.go:325] https://192.168.85.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[-]poststarthook/apiservice-discovery-controller failed: reason withheld
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W1227 09:18:11.914827 226201 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[-]poststarthook/apiservice-discovery-controller failed: reason withheld
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I1227 09:18:11.916116 226201 addons.go:530] duration metric: took 6.342664448s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner dashboard]
I1227 09:18:12.407142 226201 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1227 09:18:12.415532 226201 api_server.go:325] https://192.168.85.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[-]poststarthook/apiservice-discovery-controller failed: reason withheld
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W1227 09:18:12.415560 226201 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[-]poststarthook/apiservice-discovery-controller failed: reason withheld
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I1227 09:18:12.907142 226201 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1227 09:18:12.915201 226201 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
ok
I1227 09:18:12.916442 226201 api_server.go:141] control plane version: v1.35.0
I1227 09:18:12.916470 226201 api_server.go:131] duration metric: took 1.010081976s to wait for apiserver health ...
I1227 09:18:12.916483 226201 system_pods.go:43] waiting for kube-system pods to appear ...
I1227 09:18:12.919937 226201 system_pods.go:59] 9 kube-system pods found
I1227 09:18:12.920013 226201 system_pods.go:61] "coredns-7d764666f9-cg99w" [0f8f020a-2432-4428-bbf0-b4448d6f8b7e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1227 09:18:12.920050 226201 system_pods.go:61] "etcd-no-preload-524171" [917f850e-7d12-414f-81ef-5e9baebf15e4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1227 09:18:12.920068 226201 system_pods.go:61] "kindnet-fgvj4" [a197f9bf-430f-4070-ae5f-f8d1962f365c] Running
I1227 09:18:12.920077 226201 system_pods.go:61] "kube-apiserver-no-preload-524171" [8be044a4-a7af-4169-a8b8-819d43121f5b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1227 09:18:12.920088 226201 system_pods.go:61] "kube-controller-manager-no-preload-524171" [c3e33e0e-da6a-4e43-9071-be14a56d2181] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1227 09:18:12.920093 226201 system_pods.go:61] "kube-proxy-qpgsj" [17acebe7-2a46-4561-ba4f-c1536076d97a] Running
I1227 09:18:12.920112 226201 system_pods.go:61] "kube-scheduler-no-preload-524171" [a8c0bc4e-1bb8-40d1-a82d-f9bed47c3257] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1227 09:18:12.920124 226201 system_pods.go:61] "metrics-server-5d785b57d4-s7p4z" [b5440fa8-adbb-4d45-b518-89df473a91f6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1227 09:18:12.920129 226201 system_pods.go:61] "storage-provisioner" [6dfb3476-03f3-448d-bacb-7bf1502de3b1] Running
I1227 09:18:12.920140 226201 system_pods.go:74] duration metric: took 3.651571ms to wait for pod list to return data ...
I1227 09:18:12.920148 226201 default_sa.go:34] waiting for default service account to be created ...
I1227 09:18:12.922855 226201 default_sa.go:45] found service account: "default"
I1227 09:18:12.922879 226201 default_sa.go:55] duration metric: took 2.725388ms for default service account to be created ...
I1227 09:18:12.922889 226201 system_pods.go:116] waiting for k8s-apps to be running ...
I1227 09:18:12.925646 226201 system_pods.go:86] 9 kube-system pods found
I1227 09:18:12.925682 226201 system_pods.go:89] "coredns-7d764666f9-cg99w" [0f8f020a-2432-4428-bbf0-b4448d6f8b7e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1227 09:18:12.925691 226201 system_pods.go:89] "etcd-no-preload-524171" [917f850e-7d12-414f-81ef-5e9baebf15e4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1227 09:18:12.925697 226201 system_pods.go:89] "kindnet-fgvj4" [a197f9bf-430f-4070-ae5f-f8d1962f365c] Running
I1227 09:18:12.925705 226201 system_pods.go:89] "kube-apiserver-no-preload-524171" [8be044a4-a7af-4169-a8b8-819d43121f5b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1227 09:18:12.925725 226201 system_pods.go:89] "kube-controller-manager-no-preload-524171" [c3e33e0e-da6a-4e43-9071-be14a56d2181] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1227 09:18:12.925732 226201 system_pods.go:89] "kube-proxy-qpgsj" [17acebe7-2a46-4561-ba4f-c1536076d97a] Running
I1227 09:18:12.925751 226201 system_pods.go:89] "kube-scheduler-no-preload-524171" [a8c0bc4e-1bb8-40d1-a82d-f9bed47c3257] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1227 09:18:12.925759 226201 system_pods.go:89] "metrics-server-5d785b57d4-s7p4z" [b5440fa8-adbb-4d45-b518-89df473a91f6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1227 09:18:12.925767 226201 system_pods.go:89] "storage-provisioner" [6dfb3476-03f3-448d-bacb-7bf1502de3b1] Running
I1227 09:18:12.925775 226201 system_pods.go:126] duration metric: took 2.880213ms to wait for k8s-apps to be running ...
I1227 09:18:12.925785 226201 system_svc.go:44] waiting for kubelet service to be running ....
I1227 09:18:12.925841 226201 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1227 09:18:12.938773 226201 system_svc.go:56] duration metric: took 12.980326ms WaitForService to wait for kubelet
I1227 09:18:12.938799 226201 kubeadm.go:587] duration metric: took 7.365674984s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1227 09:18:12.938817 226201 node_conditions.go:102] verifying NodePressure condition ...
I1227 09:18:12.941812 226201 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
I1227 09:18:12.941845 226201 node_conditions.go:123] node cpu capacity is 2
I1227 09:18:12.941858 226201 node_conditions.go:105] duration metric: took 3.036654ms to run NodePressure ...
I1227 09:18:12.941872 226201 start.go:242] waiting for startup goroutines ...
I1227 09:18:12.941879 226201 start.go:247] waiting for cluster config update ...
I1227 09:18:12.941890 226201 start.go:256] writing updated cluster config ...
I1227 09:18:12.942167 226201 ssh_runner.go:195] Run: rm -f paused
I1227 09:18:12.945723 226201 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1227 09:18:12.949088 226201 pod_ready.go:83] waiting for pod "coredns-7d764666f9-cg99w" in "kube-system" namespace to be "Ready" or be gone ...
W1227 09:18:14.955019 226201 pod_ready.go:104] pod "coredns-7d764666f9-cg99w" is not "Ready", error: <nil>
W1227 09:18:17.455131 226201 pod_ready.go:104] pod "coredns-7d764666f9-cg99w" is not "Ready", error: <nil>
W1227 09:18:19.455561 226201 pod_ready.go:104] pod "coredns-7d764666f9-cg99w" is not "Ready", error: <nil>
W1227 09:18:21.954337 226201 pod_ready.go:104] pod "coredns-7d764666f9-cg99w" is not "Ready", error: <nil>
W1227 09:18:23.955555 226201 pod_ready.go:104] pod "coredns-7d764666f9-cg99w" is not "Ready", error: <nil>
W1227 09:18:26.454489 226201 pod_ready.go:104] pod "coredns-7d764666f9-cg99w" is not "Ready", error: <nil>
W1227 09:18:28.957891 226201 pod_ready.go:104] pod "coredns-7d764666f9-cg99w" is not "Ready", error: <nil>
W1227 09:18:31.454255 226201 pod_ready.go:104] pod "coredns-7d764666f9-cg99w" is not "Ready", error: <nil>
W1227 09:18:33.457988 226201 pod_ready.go:104] pod "coredns-7d764666f9-cg99w" is not "Ready", error: <nil>
W1227 09:18:35.955132 226201 pod_ready.go:104] pod "coredns-7d764666f9-cg99w" is not "Ready", error: <nil>
W1227 09:18:37.955236 226201 pod_ready.go:104] pod "coredns-7d764666f9-cg99w" is not "Ready", error: <nil>
W1227 09:18:40.454416 226201 pod_ready.go:104] pod "coredns-7d764666f9-cg99w" is not "Ready", error: <nil>
W1227 09:18:42.457590 226201 pod_ready.go:104] pod "coredns-7d764666f9-cg99w" is not "Ready", error: <nil>
W1227 09:18:44.954378 226201 pod_ready.go:104] pod "coredns-7d764666f9-cg99w" is not "Ready", error: <nil>
I1227 09:18:45.455320 226201 pod_ready.go:94] pod "coredns-7d764666f9-cg99w" is "Ready"
I1227 09:18:45.455349 226201 pod_ready.go:86] duration metric: took 32.50619442s for pod "coredns-7d764666f9-cg99w" in "kube-system" namespace to be "Ready" or be gone ...
I1227 09:18:45.458557 226201 pod_ready.go:83] waiting for pod "etcd-no-preload-524171" in "kube-system" namespace to be "Ready" or be gone ...
I1227 09:18:45.463288 226201 pod_ready.go:94] pod "etcd-no-preload-524171" is "Ready"
I1227 09:18:45.463314 226201 pod_ready.go:86] duration metric: took 4.734032ms for pod "etcd-no-preload-524171" in "kube-system" namespace to be "Ready" or be gone ...
I1227 09:18:45.465760 226201 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-524171" in "kube-system" namespace to be "Ready" or be gone ...
I1227 09:18:45.470571 226201 pod_ready.go:94] pod "kube-apiserver-no-preload-524171" is "Ready"
I1227 09:18:45.470649 226201 pod_ready.go:86] duration metric: took 4.862272ms for pod "kube-apiserver-no-preload-524171" in "kube-system" namespace to be "Ready" or be gone ...
I1227 09:18:45.473634 226201 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-524171" in "kube-system" namespace to be "Ready" or be gone ...
I1227 09:18:45.653367 226201 pod_ready.go:94] pod "kube-controller-manager-no-preload-524171" is "Ready"
I1227 09:18:45.653398 226201 pod_ready.go:86] duration metric: took 179.682931ms for pod "kube-controller-manager-no-preload-524171" in "kube-system" namespace to be "Ready" or be gone ...
I1227 09:18:45.853767 226201 pod_ready.go:83] waiting for pod "kube-proxy-qpgsj" in "kube-system" namespace to be "Ready" or be gone ...
I1227 09:18:46.253384 226201 pod_ready.go:94] pod "kube-proxy-qpgsj" is "Ready"
I1227 09:18:46.253454 226201 pod_ready.go:86] duration metric: took 399.662014ms for pod "kube-proxy-qpgsj" in "kube-system" namespace to be "Ready" or be gone ...
I1227 09:18:46.453574 226201 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-524171" in "kube-system" namespace to be "Ready" or be gone ...
I1227 09:18:46.852842 226201 pod_ready.go:94] pod "kube-scheduler-no-preload-524171" is "Ready"
I1227 09:18:46.852873 226201 pod_ready.go:86] duration metric: took 399.274053ms for pod "kube-scheduler-no-preload-524171" in "kube-system" namespace to be "Ready" or be gone ...
I1227 09:18:46.852887 226201 pod_ready.go:40] duration metric: took 33.907134524s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1227 09:18:46.907402 226201 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
I1227 09:18:46.910345 226201 out.go:203]
W1227 09:18:46.913170 226201 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
I1227 09:18:46.915911 226201 out.go:179] - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
I1227 09:18:46.918772 226201 out.go:179] * Done! kubectl is now configured to use "no-preload-524171" cluster and "default" namespace by default
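The two warning lines above flag a client/server version skew (kubectl 1.33.2 against Kubernetes 1.35.0) and point at minikube's bundled kubectl. A minimal way to follow that hint for this profile, sketched from the command suggested above (adding only the profile flag, which is an assumption here):
  out/minikube-linux-arm64 -p no-preload-524171 kubectl -- get pods -A
This is intended to route the call through the kubectl binary minikube fetches to match v1.35.0 rather than /usr/local/bin/kubectl.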
I1227 09:19:03.204484 204666 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000488648s
I1227 09:19:03.204509 204666 kubeadm.go:319]
I1227 09:19:03.204566 204666 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1227 09:19:03.204600 204666 kubeadm.go:319] - The kubelet is not running
I1227 09:19:03.204705 204666 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1227 09:19:03.204710 204666 kubeadm.go:319]
I1227 09:19:03.204814 204666 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1227 09:19:03.204846 204666 kubeadm.go:319] - 'systemctl status kubelet'
I1227 09:19:03.204877 204666 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1227 09:19:03.204881 204666 kubeadm.go:319]
I1227 09:19:03.217785 204666 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1227 09:19:03.218533 204666 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1227 09:19:03.218725 204666 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1227 09:19:03.219191 204666 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I1227 09:19:03.219198 204666 kubeadm.go:319]
I1227 09:19:03.219319 204666 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
I1227 09:19:03.219385 204666 kubeadm.go:403] duration metric: took 8m7.271122438s to StartCluster
I1227 09:19:03.219439 204666 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1227 09:19:03.219506 204666 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
I1227 09:19:03.293507 204666 cri.go:96] found id: ""
I1227 09:19:03.293587 204666 logs.go:282] 0 containers: []
W1227 09:19:03.293612 204666 logs.go:284] No container was found matching "kube-apiserver"
I1227 09:19:03.293653 204666 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1227 09:19:03.293737 204666 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
I1227 09:19:03.328940 204666 cri.go:96] found id: ""
I1227 09:19:03.328973 204666 logs.go:282] 0 containers: []
W1227 09:19:03.328982 204666 logs.go:284] No container was found matching "etcd"
I1227 09:19:03.328990 204666 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1227 09:19:03.329064 204666 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
I1227 09:19:03.374166 204666 cri.go:96] found id: ""
I1227 09:19:03.374236 204666 logs.go:282] 0 containers: []
W1227 09:19:03.374260 204666 logs.go:284] No container was found matching "coredns"
I1227 09:19:03.374286 204666 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1227 09:19:03.374375 204666 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
I1227 09:19:03.422359 204666 cri.go:96] found id: ""
I1227 09:19:03.422395 204666 logs.go:282] 0 containers: []
W1227 09:19:03.422405 204666 logs.go:284] No container was found matching "kube-scheduler"
I1227 09:19:03.422411 204666 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1227 09:19:03.422486 204666 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
I1227 09:19:03.481975 204666 cri.go:96] found id: ""
I1227 09:19:03.482015 204666 logs.go:282] 0 containers: []
W1227 09:19:03.482024 204666 logs.go:284] No container was found matching "kube-proxy"
I1227 09:19:03.482030 204666 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1227 09:19:03.482095 204666 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
I1227 09:19:03.538264 204666 cri.go:96] found id: ""
I1227 09:19:03.538290 204666 logs.go:282] 0 containers: []
W1227 09:19:03.538300 204666 logs.go:284] No container was found matching "kube-controller-manager"
I1227 09:19:03.538307 204666 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1227 09:19:03.538373 204666 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
I1227 09:19:03.592079 204666 cri.go:96] found id: ""
I1227 09:19:03.592102 204666 logs.go:282] 0 containers: []
W1227 09:19:03.592110 204666 logs.go:284] No container was found matching "kindnet"
I1227 09:19:03.592121 204666 logs.go:123] Gathering logs for describe nodes ...
I1227 09:19:03.592134 204666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1227 09:19:03.692446 204666 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1227 09:19:03.683947 4828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 09:19:03.684806 4828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 09:19:03.686564 4828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 09:19:03.686877 4828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 09:19:03.688421 4828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
E1227 09:19:03.683947 4828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 09:19:03.684806 4828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 09:19:03.686564 4828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 09:19:03.686877 4828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 09:19:03.688421 4828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I1227 09:19:03.692475 204666 logs.go:123] Gathering logs for containerd ...
I1227 09:19:03.692487 204666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1227 09:19:03.740848 204666 logs.go:123] Gathering logs for container status ...
I1227 09:19:03.740925 204666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1227 09:19:03.782208 204666 logs.go:123] Gathering logs for kubelet ...
I1227 09:19:03.782242 204666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1227 09:19:03.874946 204666 logs.go:123] Gathering logs for dmesg ...
I1227 09:19:03.874978 204666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
W1227 09:19:03.889356 204666 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000488648s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
W1227 09:19:03.889407 204666 out.go:285] *
W1227 09:19:03.889455 204666 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000488648s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
W1227 09:19:03.889475 204666 out.go:285] *
W1227 09:19:03.889727 204666 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1227 09:19:03.894675 204666 out.go:203]
W1227 09:19:03.897830 204666 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000488648s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
W1227 09:19:03.897891 204666 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W1227 09:19:03.897912 204666 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
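Taking the suggestion and related issue above at face value, a re-run of this start with the proposed kubelet setting would look roughly like the sketch below; it reuses only this profile's name and the flag quoted in the suggestion, the remaining flags from the original invocation would need to be carried over unchanged, and whether this clears the cgroup v1 rejection shown in the kubelet journal further down is not established by this log:
  out/minikube-linux-arm64 delete -p force-systemd-flag-310604
  out/minikube-linux-arm64 start -p force-systemd-flag-310604 --extra-config=kubelet.cgroup-driver=systemd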
I1227 09:19:03.901086 204666 out.go:203]
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
==> containerd <==
Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.222676117Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.222689918Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.222723092Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.222737681Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.222746641Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.222758268Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.222767417Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.222779355Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.222792352Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.222828939Z" level=info msg="Connect containerd service"
Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.223105760Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.223640495Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.244895830Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.244995835Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.245339987Z" level=info msg="Start subscribing containerd event"
Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.245415484Z" level=info msg="Start recovering state"
Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.283869264Z" level=info msg="Start event monitor"
Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.283925355Z" level=info msg="Start cni network conf syncer for default"
Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.283934692Z" level=info msg="Start streaming server"
Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.283944325Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.283953860Z" level=info msg="runtime interface starting up..."
Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.283960424Z" level=info msg="starting plugins..."
Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.284136205Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.284261745Z" level=info msg="containerd successfully booted in 0.082382s"
Dec 27 09:10:54 force-systemd-flag-310604 systemd[1]: Started containerd.service - containerd container runtime.
==> describe nodes <==
command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1227 09:19:05.796380 4958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 09:19:05.797337 4958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 09:19:05.799226 4958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 09:19:05.799776 4958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 09:19:05.801378 4958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
==> dmesg <==
[Dec27 08:17] ACPI: SRAT not present
[ +0.000000] ACPI: SRAT not present
[ +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
[ +0.015479] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[ +0.516409] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
[ +0.034238] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
[ +0.771451] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
[ +6.481009] kauditd_printk_skb: 39 callbacks suppressed
[Dec27 08:29] hrtimer: interrupt took 43410871 ns
==> kernel <==
09:19:05 up 1:01, 0 user, load average: 2.16, 1.89, 2.01
Linux force-systemd-flag-310604 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
==> kubelet <==
Dec 27 09:19:02 force-systemd-flag-310604 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 27 09:19:02 force-systemd-flag-310604 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
Dec 27 09:19:02 force-systemd-flag-310604 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 27 09:19:02 force-systemd-flag-310604 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 27 09:19:02 force-systemd-flag-310604 kubelet[4764]: E1227 09:19:02.792050 4764 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 27 09:19:02 force-systemd-flag-310604 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 27 09:19:02 force-systemd-flag-310604 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 27 09:19:03 force-systemd-flag-310604 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
Dec 27 09:19:03 force-systemd-flag-310604 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 27 09:19:03 force-systemd-flag-310604 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 27 09:19:03 force-systemd-flag-310604 kubelet[4805]: E1227 09:19:03.570849 4805 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 27 09:19:03 force-systemd-flag-310604 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 27 09:19:03 force-systemd-flag-310604 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 27 09:19:04 force-systemd-flag-310604 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
Dec 27 09:19:04 force-systemd-flag-310604 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 27 09:19:04 force-systemd-flag-310604 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 27 09:19:04 force-systemd-flag-310604 kubelet[4850]: E1227 09:19:04.350937 4850 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 27 09:19:04 force-systemd-flag-310604 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 27 09:19:04 force-systemd-flag-310604 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 27 09:19:05 force-systemd-flag-310604 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
Dec 27 09:19:05 force-systemd-flag-310604 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 27 09:19:05 force-systemd-flag-310604 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 27 09:19:05 force-systemd-flag-310604 kubelet[4892]: E1227 09:19:05.421410 4892 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 27 09:19:05 force-systemd-flag-310604 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 27 09:19:05 force-systemd-flag-310604 systemd[1]: kubelet.service: Failed with result 'exit-code'.
-- /stdout --
helpers_test.go:263: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-310604 -n force-systemd-flag-310604
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-310604 -n force-systemd-flag-310604: exit status 6 (522.261034ms)
-- stdout --
Stopped
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1227 09:19:06.499840 230836 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-310604" does not appear in /home/jenkins/minikube-integration/22344-2451/kubeconfig
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-flag-310604" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-flag-310604" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-arm64 delete -p force-systemd-flag-310604
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-310604: (2.182749384s)
--- FAIL: TestForceSystemdFlag (505.94s)
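
Note on the failure mode: every kubelet restart in the journal above (restart counters 318 through 321) dies on the same validation error, "kubelet is configured to not run on a host using cgroup v1", which implies the test node is still running a cgroup v1 hierarchy that kubelet v1.35.0 refuses to start on. As a diagnostic sketch only (not part of the captured test output, and assuming shell access to the node), the cgroup mode can be confirmed by probing the filesystem type mounted at /sys/fs/cgroup:

    # Sketch: report which cgroup hierarchy the kernel has mounted.
    stat -fc %T /sys/fs/cgroup/
    # cgroup2fs -> cgroup v2 (unified hierarchy); kubelet v1.35 starts normally
    # tmpfs     -> cgroup v1; kubelet fails validation exactly as in the log above

If the probe reports tmpfs, the crash loop is a host-environment limitation rather than a problem with --force-systemd handling; getting this test to pass would require a cgroup v2 host or an older Kubernetes version that still tolerates cgroup v1.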