=== RUN TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT TestForceSystemdFlag
docker_test.go:91: (dbg) Run: out/minikube-linux-arm64 start -p force-systemd-flag-875839 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=containerd
E1227 20:42:51.829119 302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/addons-829359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-flag-875839 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=containerd: exit status 109 (8m21.056914626s)
-- stdout --
* [force-systemd-flag-875839] minikube v1.37.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=22332
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/22332-300670/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-300670/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "force-systemd-flag-875839" primary control-plane node in "force-systemd-flag-875839" cluster
* Pulling base image v0.0.48-1766570851-22316 ...
* Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
-- /stdout --
** stderr **
I1227 20:42:44.450614 512816 out.go:360] Setting OutFile to fd 1 ...
I1227 20:42:44.450748 512816 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:42:44.450759 512816 out.go:374] Setting ErrFile to fd 2...
I1227 20:42:44.450765 512816 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:42:44.451046 512816 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-300670/.minikube/bin
I1227 20:42:44.451537 512816 out.go:368] Setting JSON to false
I1227 20:42:44.452468 512816 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8716,"bootTime":1766859449,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
I1227 20:42:44.452539 512816 start.go:143] virtualization:
I1227 20:42:44.456098 512816 out.go:179] * [force-systemd-flag-875839] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I1227 20:42:44.460810 512816 out.go:179] - MINIKUBE_LOCATION=22332
I1227 20:42:44.460944 512816 notify.go:221] Checking for updates...
I1227 20:42:44.467481 512816 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1227 20:42:44.470760 512816 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22332-300670/kubeconfig
I1227 20:42:44.474017 512816 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-300670/.minikube
I1227 20:42:44.477177 512816 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I1227 20:42:44.480226 512816 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1227 20:42:44.483786 512816 config.go:182] Loaded profile config "force-systemd-env-857112": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1227 20:42:44.483901 512816 driver.go:422] Setting default libvirt URI to qemu:///system
I1227 20:42:44.514242 512816 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
I1227 20:42:44.514368 512816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1227 20:42:44.600674 512816 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 20:42:44.590030356 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1227 20:42:44.600784 512816 docker.go:319] overlay module found
I1227 20:42:44.603988 512816 out.go:179] * Using the docker driver based on user configuration
I1227 20:42:44.606895 512816 start.go:309] selected driver: docker
I1227 20:42:44.606918 512816 start.go:928] validating driver "docker" against <nil>
I1227 20:42:44.606938 512816 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1227 20:42:44.607721 512816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1227 20:42:44.660643 512816 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 20:42:44.65175192 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1227 20:42:44.660805 512816 start_flags.go:333] no existing cluster config was found, will generate one from the flags
I1227 20:42:44.661029 512816 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
I1227 20:42:44.663983 512816 out.go:179] * Using Docker driver with root privileges
I1227 20:42:44.666777 512816 cni.go:84] Creating CNI manager for ""
I1227 20:42:44.666837 512816 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1227 20:42:44.666853 512816 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
I1227 20:42:44.666931 512816 start.go:353] cluster config:
{Name:force-systemd-flag-875839 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-875839 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1227 20:42:44.670122 512816 out.go:179] * Starting "force-systemd-flag-875839" primary control-plane node in "force-systemd-flag-875839" cluster
I1227 20:42:44.673023 512816 cache.go:134] Beginning downloading kic base image for docker with containerd
I1227 20:42:44.675977 512816 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
I1227 20:42:44.678899 512816 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I1227 20:42:44.678926 512816 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
I1227 20:42:44.678947 512816 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-300670/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
I1227 20:42:44.678957 512816 cache.go:65] Caching tarball of preloaded images
I1227 20:42:44.679037 512816 preload.go:251] Found /home/jenkins/minikube-integration/22332-300670/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I1227 20:42:44.679046 512816 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
I1227 20:42:44.679152 512816 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/config.json ...
I1227 20:42:44.679204 512816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/config.json: {Name:mk226d5712d36dc79e3bc51dc29625caf226ee6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 20:42:44.698707 512816 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
I1227 20:42:44.698733 512816 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
I1227 20:42:44.698766 512816 cache.go:243] Successfully downloaded all kic artifacts
I1227 20:42:44.698799 512816 start.go:360] acquireMachinesLock for force-systemd-flag-875839: {Name:mka1cb79a66dbff1223f12a6e0653c935a407a1e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1227 20:42:44.698917 512816 start.go:364] duration metric: took 96.443µs to acquireMachinesLock for "force-systemd-flag-875839"
I1227 20:42:44.698951 512816 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-875839 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-875839 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1227 20:42:44.699019 512816 start.go:125] createHost starting for "" (driver="docker")
I1227 20:42:44.702439 512816 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
I1227 20:42:44.702717 512816 start.go:159] libmachine.API.Create for "force-systemd-flag-875839" (driver="docker")
I1227 20:42:44.702756 512816 client.go:173] LocalClient.Create starting
I1227 20:42:44.702822 512816 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca.pem
I1227 20:42:44.702861 512816 main.go:144] libmachine: Decoding PEM data...
I1227 20:42:44.702888 512816 main.go:144] libmachine: Parsing certificate...
I1227 20:42:44.702941 512816 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22332-300670/.minikube/certs/cert.pem
I1227 20:42:44.702963 512816 main.go:144] libmachine: Decoding PEM data...
I1227 20:42:44.702975 512816 main.go:144] libmachine: Parsing certificate...
I1227 20:42:44.703517 512816 cli_runner.go:164] Run: docker network inspect force-systemd-flag-875839 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1227 20:42:44.719292 512816 cli_runner.go:211] docker network inspect force-systemd-flag-875839 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1227 20:42:44.719369 512816 network_create.go:284] running [docker network inspect force-systemd-flag-875839] to gather additional debugging logs...
I1227 20:42:44.719387 512816 cli_runner.go:164] Run: docker network inspect force-systemd-flag-875839
W1227 20:42:44.733398 512816 cli_runner.go:211] docker network inspect force-systemd-flag-875839 returned with exit code 1
I1227 20:42:44.733430 512816 network_create.go:287] error running [docker network inspect force-systemd-flag-875839]: docker network inspect force-systemd-flag-875839: exit status 1
stdout:
[]
stderr:
Error response from daemon: network force-systemd-flag-875839 not found
I1227 20:42:44.733442 512816 network_create.go:289] output of [docker network inspect force-systemd-flag-875839]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network force-systemd-flag-875839 not found
** /stderr **
I1227 20:42:44.733536 512816 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1227 20:42:44.750679 512816 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-39a3264d8f81 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5a:08:2a:c8:87:59} reservation:<nil>}
I1227 20:42:44.751059 512816 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5ad751755a00 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:fe:9d:74:07:ce:ba} reservation:<nil>}
I1227 20:42:44.751350 512816 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f84ef5e3062f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b6:ef:60:e2:0e:e4} reservation:<nil>}
I1227 20:42:44.751800 512816 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a47a80}
I1227 20:42:44.751824 512816 network_create.go:124] attempt to create docker network force-systemd-flag-875839 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
I1227 20:42:44.751879 512816 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-875839 force-systemd-flag-875839
I1227 20:42:44.817033 512816 network_create.go:108] docker network force-systemd-flag-875839 192.168.76.0/24 created
I1227 20:42:44.817068 512816 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-flag-875839" container
I1227 20:42:44.817162 512816 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1227 20:42:44.833900 512816 cli_runner.go:164] Run: docker volume create force-systemd-flag-875839 --label name.minikube.sigs.k8s.io=force-systemd-flag-875839 --label created_by.minikube.sigs.k8s.io=true
I1227 20:42:44.855305 512816 oci.go:103] Successfully created a docker volume force-systemd-flag-875839
I1227 20:42:44.855397 512816 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-875839-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-875839 --entrypoint /usr/bin/test -v force-systemd-flag-875839:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
I1227 20:42:45.520564 512816 oci.go:107] Successfully prepared a docker volume force-systemd-flag-875839
I1227 20:42:45.520637 512816 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I1227 20:42:45.520651 512816 kic.go:194] Starting extracting preloaded images to volume ...
I1227 20:42:45.520724 512816 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22332-300670/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-875839:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
I1227 20:42:49.411447 512816 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22332-300670/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-875839:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (3.890669583s)
I1227 20:42:49.411480 512816 kic.go:203] duration metric: took 3.890825481s to extract preloaded images to volume ...
W1227 20:42:49.411625 512816 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1227 20:42:49.411780 512816 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1227 20:42:49.466802 512816 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-875839 --name force-systemd-flag-875839 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-875839 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-875839 --network force-systemd-flag-875839 --ip 192.168.76.2 --volume force-systemd-flag-875839:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
I1227 20:42:49.764752 512816 cli_runner.go:164] Run: docker container inspect force-systemd-flag-875839 --format={{.State.Running}}
I1227 20:42:49.794580 512816 cli_runner.go:164] Run: docker container inspect force-systemd-flag-875839 --format={{.State.Status}}
I1227 20:42:49.818888 512816 cli_runner.go:164] Run: docker exec force-systemd-flag-875839 stat /var/lib/dpkg/alternatives/iptables
I1227 20:42:49.884825 512816 oci.go:144] the created container "force-systemd-flag-875839" has a running status.
I1227 20:42:49.884858 512816 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22332-300670/.minikube/machines/force-systemd-flag-875839/id_rsa...
I1227 20:42:50.331141 512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/machines/force-systemd-flag-875839/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I1227 20:42:50.331230 512816 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22332-300670/.minikube/machines/force-systemd-flag-875839/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1227 20:42:50.354044 512816 cli_runner.go:164] Run: docker container inspect force-systemd-flag-875839 --format={{.State.Status}}
I1227 20:42:50.377426 512816 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1227 20:42:50.377459 512816 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-875839 chown docker:docker /home/docker/.ssh/authorized_keys]
I1227 20:42:50.420652 512816 cli_runner.go:164] Run: docker container inspect force-systemd-flag-875839 --format={{.State.Status}}
I1227 20:42:50.438519 512816 machine.go:94] provisionDockerMachine start ...
I1227 20:42:50.438612 512816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-875839
I1227 20:42:50.456377 512816 main.go:144] libmachine: Using SSH client type: native
I1227 20:42:50.456728 512816 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33416 <nil> <nil>}
I1227 20:42:50.456744 512816 main.go:144] libmachine: About to run SSH command:
hostname
I1227 20:42:50.457445 512816 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I1227 20:42:53.598911 512816 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-875839
I1227 20:42:53.598937 512816 ubuntu.go:182] provisioning hostname "force-systemd-flag-875839"
I1227 20:42:53.599044 512816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-875839
I1227 20:42:53.617338 512816 main.go:144] libmachine: Using SSH client type: native
I1227 20:42:53.617662 512816 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33416 <nil> <nil>}
I1227 20:42:53.617679 512816 main.go:144] libmachine: About to run SSH command:
sudo hostname force-systemd-flag-875839 && echo "force-systemd-flag-875839" | sudo tee /etc/hostname
I1227 20:42:53.764333 512816 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-875839
I1227 20:42:53.764479 512816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-875839
I1227 20:42:53.782984 512816 main.go:144] libmachine: Using SSH client type: native
I1227 20:42:53.783321 512816 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33416 <nil> <nil>}
I1227 20:42:53.783352 512816 main.go:144] libmachine: About to run SSH command:
if ! grep -xq '.*\sforce-systemd-flag-875839' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-875839/g' /etc/hosts;
else
echo '127.0.1.1 force-systemd-flag-875839' | sudo tee -a /etc/hosts;
fi
fi
I1227 20:42:53.923458 512816 main.go:144] libmachine: SSH cmd err, output: <nil>:
I1227 20:42:53.923486 512816 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-300670/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-300670/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-300670/.minikube}
I1227 20:42:53.923554 512816 ubuntu.go:190] setting up certificates
I1227 20:42:53.923579 512816 provision.go:84] configureAuth start
I1227 20:42:53.923657 512816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-875839
I1227 20:42:53.941558 512816 provision.go:143] copyHostCerts
I1227 20:42:53.941608 512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22332-300670/.minikube/ca.pem
I1227 20:42:53.941644 512816 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-300670/.minikube/ca.pem, removing ...
I1227 20:42:53.941656 512816 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-300670/.minikube/ca.pem
I1227 20:42:53.941740 512816 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-300670/.minikube/ca.pem (1082 bytes)
I1227 20:42:53.941834 512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22332-300670/.minikube/cert.pem
I1227 20:42:53.941860 512816 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-300670/.minikube/cert.pem, removing ...
I1227 20:42:53.941879 512816 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-300670/.minikube/cert.pem
I1227 20:42:53.941908 512816 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-300670/.minikube/cert.pem (1123 bytes)
I1227 20:42:53.941966 512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22332-300670/.minikube/key.pem
I1227 20:42:53.941987 512816 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-300670/.minikube/key.pem, removing ...
I1227 20:42:53.941997 512816 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-300670/.minikube/key.pem
I1227 20:42:53.942022 512816 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-300670/.minikube/key.pem (1679 bytes)
I1227 20:42:53.942086 512816 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-300670/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-875839 san=[127.0.0.1 192.168.76.2 force-systemd-flag-875839 localhost minikube]
I1227 20:42:54.202929 512816 provision.go:177] copyRemoteCerts
I1227 20:42:54.202994 512816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1227 20:42:54.203044 512816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-875839
I1227 20:42:54.221943 512816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33416 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/force-systemd-flag-875839/id_rsa Username:docker}
I1227 20:42:54.321588 512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I1227 20:42:54.321656 512816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1227 20:42:54.343016 512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/machines/server.pem -> /etc/docker/server.pem
I1227 20:42:54.343080 512816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
I1227 20:42:54.360298 512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I1227 20:42:54.360375 512816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1227 20:42:54.377941 512816 provision.go:87] duration metric: took 454.325341ms to configureAuth
I1227 20:42:54.377969 512816 ubuntu.go:206] setting minikube options for container-runtime
I1227 20:42:54.378138 512816 config.go:182] Loaded profile config "force-systemd-flag-875839": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1227 20:42:54.378151 512816 machine.go:97] duration metric: took 3.939607712s to provisionDockerMachine
I1227 20:42:54.378158 512816 client.go:176] duration metric: took 9.675390037s to LocalClient.Create
I1227 20:42:54.378178 512816 start.go:167] duration metric: took 9.675461349s to libmachine.API.Create "force-systemd-flag-875839"
I1227 20:42:54.378187 512816 start.go:293] postStartSetup for "force-systemd-flag-875839" (driver="docker")
I1227 20:42:54.378196 512816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1227 20:42:54.378248 512816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1227 20:42:54.378289 512816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-875839
I1227 20:42:54.394904 512816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33416 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/force-systemd-flag-875839/id_rsa Username:docker}
I1227 20:42:54.495529 512816 ssh_runner.go:195] Run: cat /etc/os-release
I1227 20:42:54.498962 512816 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1227 20:42:54.498995 512816 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1227 20:42:54.499008 512816 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-300670/.minikube/addons for local assets ...
I1227 20:42:54.499064 512816 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-300670/.minikube/files for local assets ...
I1227 20:42:54.499159 512816 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-300670/.minikube/files/etc/ssl/certs/3025412.pem -> 3025412.pem in /etc/ssl/certs
I1227 20:42:54.499172 512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/files/etc/ssl/certs/3025412.pem -> /etc/ssl/certs/3025412.pem
I1227 20:42:54.499303 512816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1227 20:42:54.507013 512816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/files/etc/ssl/certs/3025412.pem --> /etc/ssl/certs/3025412.pem (1708 bytes)
I1227 20:42:54.524308 512816 start.go:296] duration metric: took 146.106071ms for postStartSetup
I1227 20:42:54.524674 512816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-875839
I1227 20:42:54.541545 512816 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/config.json ...
I1227 20:42:54.541820 512816 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1227 20:42:54.541868 512816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-875839
I1227 20:42:54.558475 512816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33416 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/force-systemd-flag-875839/id_rsa Username:docker}
I1227 20:42:54.656227 512816 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1227 20:42:54.660890 512816 start.go:128] duration metric: took 9.961854464s to createHost
I1227 20:42:54.660916 512816 start.go:83] releasing machines lock for "force-systemd-flag-875839", held for 9.961983524s
I1227 20:42:54.661038 512816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-875839
I1227 20:42:54.678050 512816 ssh_runner.go:195] Run: cat /version.json
I1227 20:42:54.678108 512816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-875839
I1227 20:42:54.678353 512816 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1227 20:42:54.678414 512816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-875839
I1227 20:42:54.697205 512816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33416 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/force-systemd-flag-875839/id_rsa Username:docker}
I1227 20:42:54.698536 512816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33416 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/force-systemd-flag-875839/id_rsa Username:docker}
I1227 20:42:54.790825 512816 ssh_runner.go:195] Run: systemctl --version
I1227 20:42:54.890335 512816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1227 20:42:54.894628 512816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1227 20:42:54.894703 512816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1227 20:42:54.922183 512816 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
I1227 20:42:54.922205 512816 start.go:496] detecting cgroup driver to use...
I1227 20:42:54.922220 512816 start.go:500] using "systemd" cgroup driver as enforced via flags
I1227 20:42:54.922274 512816 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1227 20:42:54.937492 512816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1227 20:42:54.950607 512816 docker.go:218] disabling cri-docker service (if available) ...
I1227 20:42:54.950719 512816 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1227 20:42:54.968539 512816 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1227 20:42:54.987404 512816 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1227 20:42:55.144395 512816 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1227 20:42:55.270151 512816 docker.go:234] disabling docker service ...
I1227 20:42:55.270245 512816 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1227 20:42:55.293254 512816 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1227 20:42:55.307641 512816 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1227 20:42:55.428488 512816 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1227 20:42:55.544420 512816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1227 20:42:55.556970 512816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1227 20:42:55.572425 512816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1227 20:42:55.581870 512816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1227 20:42:55.591038 512816 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
I1227 20:42:55.591152 512816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
I1227 20:42:55.600400 512816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1227 20:42:55.609307 512816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1227 20:42:55.618091 512816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1227 20:42:55.627102 512816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1227 20:42:55.635238 512816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1227 20:42:55.644259 512816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1227 20:42:55.653590 512816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1227 20:42:55.662844 512816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1227 20:42:55.670803 512816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1227 20:42:55.678906 512816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 20:42:55.792508 512816 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1227 20:42:55.925141 512816 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
I1227 20:42:55.925261 512816 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1227 20:42:55.929276 512816 start.go:574] Will wait 60s for crictl version
I1227 20:42:55.929388 512816 ssh_runner.go:195] Run: which crictl
I1227 20:42:55.932931 512816 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1227 20:42:55.957058 512816 start.go:590] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v2.2.1
RuntimeApiVersion: v1
I1227 20:42:55.957181 512816 ssh_runner.go:195] Run: containerd --version
I1227 20:42:55.979962 512816 ssh_runner.go:195] Run: containerd --version
I1227 20:42:56.007149 512816 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
I1227 20:42:56.010308 512816 cli_runner.go:164] Run: docker network inspect force-systemd-flag-875839 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1227 20:42:56.027937 512816 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I1227 20:42:56.032126 512816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1227 20:42:56.043260 512816 kubeadm.go:884] updating cluster {Name:force-systemd-flag-875839 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-875839 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
I1227 20:42:56.043408 512816 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I1227 20:42:56.043480 512816 ssh_runner.go:195] Run: sudo crictl images --output json
I1227 20:42:56.072941 512816 containerd.go:635] all images are preloaded for containerd runtime.
I1227 20:42:56.072967 512816 containerd.go:542] Images already preloaded, skipping extraction
I1227 20:42:56.073040 512816 ssh_runner.go:195] Run: sudo crictl images --output json
I1227 20:42:56.098189 512816 containerd.go:635] all images are preloaded for containerd runtime.
I1227 20:42:56.098216 512816 cache_images.go:86] Images are preloaded, skipping loading
I1227 20:42:56.098225 512816 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 containerd true true} ...
I1227 20:42:56.098317 512816 kubeadm.go:947] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-875839 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
[Install]
config:
{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-875839 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1227 20:42:56.098386 512816 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
I1227 20:42:56.123756 512816 cni.go:84] Creating CNI manager for ""
I1227 20:42:56.123781 512816 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1227 20:42:56.123798 512816 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1227 20:42:56.123827 512816 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-875839 NodeName:force-systemd-flag-875839 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1227 20:42:56.123946 512816 kubeadm.go:203] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.76.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
name: "force-systemd-flag-875839"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.76.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.35.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I1227 20:42:56.124019 512816 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
I1227 20:42:56.133114 512816 binaries.go:51] Found k8s binaries, skipping transfer
I1227 20:42:56.133224 512816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1227 20:42:56.141401 512816 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
I1227 20:42:56.157153 512816 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1227 20:42:56.172136 512816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
I1227 20:42:56.185664 512816 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I1227 20:42:56.189569 512816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1227 20:42:56.200285 512816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 20:42:56.310897 512816 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1227 20:42:56.330461 512816 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839 for IP: 192.168.76.2
I1227 20:42:56.330485 512816 certs.go:195] generating shared ca certs ...
I1227 20:42:56.330501 512816 certs.go:227] acquiring lock for ca certs: {Name:mkf93c4b7b6f0a265527090e39bdf731f6a1491b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 20:42:56.330640 512816 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-300670/.minikube/ca.key
I1227 20:42:56.330697 512816 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-300670/.minikube/proxy-client-ca.key
I1227 20:42:56.330709 512816 certs.go:257] generating profile certs ...
I1227 20:42:56.330767 512816 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/client.key
I1227 20:42:56.330784 512816 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/client.crt with IP's: []
I1227 20:42:56.654113 512816 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/client.crt ...
I1227 20:42:56.654148 512816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/client.crt: {Name:mk690272e7c9732b7460196a75d46ce521525785 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 20:42:56.654393 512816 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/client.key ...
I1227 20:42:56.654411 512816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/client.key: {Name:mkc39b22fbff4b40897d4f98a3d62c6f55391f51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 20:42:56.654517 512816 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/apiserver.key.06feb0c1
I1227 20:42:56.654538 512816 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/apiserver.crt.06feb0c1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
I1227 20:42:56.834765 512816 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/apiserver.crt.06feb0c1 ...
I1227 20:42:56.834804 512816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/apiserver.crt.06feb0c1: {Name:mkc9aaa28a12a38cdd436242cc98ebbe1035831f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 20:42:56.834991 512816 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/apiserver.key.06feb0c1 ...
I1227 20:42:56.835006 512816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/apiserver.key.06feb0c1: {Name:mk3594c59348fecf67f0f33d24079612f39e8847 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 20:42:56.835098 512816 certs.go:382] copying /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/apiserver.crt.06feb0c1 -> /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/apiserver.crt
I1227 20:42:56.835196 512816 certs.go:386] copying /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/apiserver.key.06feb0c1 -> /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/apiserver.key
I1227 20:42:56.835265 512816 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/proxy-client.key
I1227 20:42:56.835286 512816 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/proxy-client.crt with IP's: []
I1227 20:42:57.497782 512816 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/proxy-client.crt ...
I1227 20:42:57.497816 512816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/proxy-client.crt: {Name:mk6c13ddc40f97cd4770101e7d4b970e00fe21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 20:42:57.498023 512816 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/proxy-client.key ...
I1227 20:42:57.498038 512816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/proxy-client.key: {Name:mk60c7e4a1d2a1da5fcd88dbfb787475edf7630f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 20:42:57.498129 512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I1227 20:42:57.498152 512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I1227 20:42:57.498166 512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I1227 20:42:57.498182 512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I1227 20:42:57.498197 512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I1227 20:42:57.498209 512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I1227 20:42:57.498226 512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I1227 20:42:57.498241 512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I1227 20:42:57.498302 512816 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/302541.pem (1338 bytes)
W1227 20:42:57.498346 512816 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-300670/.minikube/certs/302541_empty.pem, impossibly tiny 0 bytes
I1227 20:42:57.498360 512816 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca-key.pem (1679 bytes)
I1227 20:42:57.498388 512816 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca.pem (1082 bytes)
I1227 20:42:57.498416 512816 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/cert.pem (1123 bytes)
I1227 20:42:57.498445 512816 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/key.pem (1679 bytes)
I1227 20:42:57.498495 512816 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-300670/.minikube/files/etc/ssl/certs/3025412.pem (1708 bytes)
I1227 20:42:57.498530 512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/files/etc/ssl/certs/3025412.pem -> /usr/share/ca-certificates/3025412.pem
I1227 20:42:57.498552 512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I1227 20:42:57.498567 512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/302541.pem -> /usr/share/ca-certificates/302541.pem
I1227 20:42:57.499202 512816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1227 20:42:57.518888 512816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1227 20:42:57.541602 512816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1227 20:42:57.560371 512816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1227 20:42:57.578574 512816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
I1227 20:42:57.596428 512816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1227 20:42:57.614059 512816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1227 20:42:57.631746 512816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1227 20:42:57.649051 512816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/files/etc/ssl/certs/3025412.pem --> /usr/share/ca-certificates/3025412.pem (1708 bytes)
I1227 20:42:57.666899 512816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1227 20:42:57.685179 512816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/certs/302541.pem --> /usr/share/ca-certificates/302541.pem (1338 bytes)
I1227 20:42:57.704185 512816 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
I1227 20:42:57.717788 512816 ssh_runner.go:195] Run: openssl version
I1227 20:42:57.724506 512816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/302541.pem
I1227 20:42:57.732186 512816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/302541.pem /etc/ssl/certs/302541.pem
I1227 20:42:57.739774 512816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/302541.pem
I1227 20:42:57.743710 512816 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:01 /usr/share/ca-certificates/302541.pem
I1227 20:42:57.743773 512816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/302541.pem
I1227 20:42:57.785241 512816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I1227 20:42:57.792974 512816 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/302541.pem /etc/ssl/certs/51391683.0
I1227 20:42:57.800846 512816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3025412.pem
I1227 20:42:57.810694 512816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3025412.pem /etc/ssl/certs/3025412.pem
I1227 20:42:57.819628 512816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3025412.pem
I1227 20:42:57.825369 512816 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:01 /usr/share/ca-certificates/3025412.pem
I1227 20:42:57.825452 512816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3025412.pem
I1227 20:42:57.867666 512816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I1227 20:42:57.875568 512816 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3025412.pem /etc/ssl/certs/3ec20f2e.0
I1227 20:42:57.882973 512816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1227 20:42:57.890507 512816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1227 20:42:57.898264 512816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1227 20:42:57.902159 512816 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
I1227 20:42:57.902227 512816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1227 20:42:57.943302 512816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1227 20:42:57.950884 512816 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
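The openssl/ln sequence above builds OpenSSL's hashed trust directory: each CA under /usr/share/ca-certificates is linked into /etc/ssl/certs both by name and by its subject-name hash with a .0 suffix (51391683, 3ec20f2e and b5213941 are the hashes computed above). A minimal sketch for one certificate:

  pem=/usr/share/ca-certificates/minikubeCA.pem
  sudo ln -fs "$pem" /etc/ssl/certs/minikubeCA.pem                  # link by name into the trust dir
  h=$(openssl x509 -hash -noout -in "$pem")                         # subject-name hash, b5213941 here
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"   # hashed symlink OpenSSL resolves at verify time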
I1227 20:42:57.958222 512816 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1227 20:42:57.961793 512816 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1227 20:42:57.961846 512816 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-875839 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-875839 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1227 20:42:57.961921 512816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1227 20:42:57.961986 512816 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1227 20:42:57.992506 512816 cri.go:96] found id: ""
I1227 20:42:57.992583 512816 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1227 20:42:58.003253 512816 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1227 20:42:58.011987 512816 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1227 20:42:58.012081 512816 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1227 20:42:58.020896 512816 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1227 20:42:58.020916 512816 kubeadm.go:158] found existing configuration files:
I1227 20:42:58.020969 512816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1227 20:42:58.030325 512816 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1227 20:42:58.030399 512816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1227 20:42:58.039358 512816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1227 20:42:58.049599 512816 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1227 20:42:58.049713 512816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1227 20:42:58.058561 512816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1227 20:42:58.068422 512816 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1227 20:42:58.068537 512816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1227 20:42:58.077497 512816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1227 20:42:58.087253 512816 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1227 20:42:58.087372 512816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1227 20:42:58.096312 512816 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1227 20:42:58.136339 512816 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
I1227 20:42:58.136633 512816 kubeadm.go:319] [preflight] Running pre-flight checks
I1227 20:42:58.210092 512816 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1227 20:42:58.210244 512816 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1227 20:42:58.210334 512816 kubeadm.go:319] OS: Linux
I1227 20:42:58.210426 512816 kubeadm.go:319] CGROUPS_CPU: enabled
I1227 20:42:58.210510 512816 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1227 20:42:58.210589 512816 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1227 20:42:58.210671 512816 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1227 20:42:58.210755 512816 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1227 20:42:58.210837 512816 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1227 20:42:58.210918 512816 kubeadm.go:319] CGROUPS_PIDS: enabled
I1227 20:42:58.211026 512816 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1227 20:42:58.211119 512816 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1227 20:42:58.277645 512816 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1227 20:42:58.277833 512816 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1227 20:42:58.277971 512816 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1227 20:42:58.283796 512816 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1227 20:42:58.290858 512816 out.go:252] - Generating certificates and keys ...
I1227 20:42:58.291030 512816 kubeadm.go:319] [certs] Using existing ca certificate authority
I1227 20:42:58.291136 512816 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1227 20:42:58.557075 512816 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1227 20:42:58.748413 512816 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1227 20:42:58.793614 512816 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1227 20:42:59.304343 512816 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1227 20:42:59.833617 512816 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1227 20:42:59.834012 512816 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-875839 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
I1227 20:43:00.429800 512816 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1227 20:43:00.430239 512816 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-875839 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
I1227 20:43:00.529822 512816 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1227 20:43:01.296650 512816 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1227 20:43:01.612939 512816 kubeadm.go:319] [certs] Generating "sa" key and public key
I1227 20:43:01.613240 512816 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1227 20:43:01.833117 512816 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1227 20:43:02.012700 512816 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1227 20:43:02.166458 512816 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1227 20:43:02.299475 512816 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1227 20:43:02.455123 512816 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1227 20:43:02.456053 512816 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1227 20:43:02.458808 512816 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1227 20:43:02.462661 512816 out.go:252] - Booting up control plane ...
I1227 20:43:02.462775 512816 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1227 20:43:02.462871 512816 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1227 20:43:02.462950 512816 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1227 20:43:02.480678 512816 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1227 20:43:02.481005 512816 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1227 20:43:02.489220 512816 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1227 20:43:02.489577 512816 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1227 20:43:02.489803 512816 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1227 20:43:02.667653 512816 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1227 20:43:02.667780 512816 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1227 20:47:02.664377 512816 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000907971s
I1227 20:47:02.664410 512816 kubeadm.go:319]
I1227 20:47:02.664468 512816 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1227 20:47:02.664510 512816 kubeadm.go:319] - The kubelet is not running
I1227 20:47:02.664624 512816 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1227 20:47:02.664635 512816 kubeadm.go:319]
I1227 20:47:02.664740 512816 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1227 20:47:02.664776 512816 kubeadm.go:319] - 'systemctl status kubelet'
I1227 20:47:02.664807 512816 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1227 20:47:02.664816 512816 kubeadm.go:319]
I1227 20:47:02.679580 512816 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1227 20:47:02.680004 512816 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1227 20:47:02.680112 512816 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1227 20:47:02.680529 512816 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I1227 20:47:02.680542 512816 kubeadm.go:319]
I1227 20:47:02.680639 512816 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
W1227 20:47:02.680748 512816 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-875839 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-875839 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000907971s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
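At this point the first kubeadm init attempt has timed out waiting on the kubelet health endpoint, and minikube falls back to a reset-and-retry (below). The kubeadm hints above can be followed directly on the node, e.g. via minikube ssh -p force-systemd-flag-875839 (a sketch; the commands are the ones kubeadm names):

  systemctl status kubelet                    # is the unit active, and why did it last exit?
  journalctl -xeu kubelet -n 200              # recent kubelet log with systemd context
  curl -sSL http://127.0.0.1:10248/healthz    # the endpoint kubeadm's 4m0s wait loop polls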
I1227 20:47:02.680822 512816 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I1227 20:47:03.181331 512816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1227 20:47:03.200995 512816 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1227 20:47:03.201068 512816 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1227 20:47:03.212061 512816 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1227 20:47:03.212087 512816 kubeadm.go:158] found existing configuration files:
I1227 20:47:03.212138 512816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1227 20:47:03.227054 512816 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1227 20:47:03.227128 512816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1227 20:47:03.236695 512816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1227 20:47:03.246584 512816 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1227 20:47:03.246660 512816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1227 20:47:03.256088 512816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1227 20:47:03.265760 512816 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1227 20:47:03.265828 512816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1227 20:47:03.274961 512816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1227 20:47:03.284859 512816 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1227 20:47:03.284927 512816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1227 20:47:03.295924 512816 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1227 20:47:03.349564 512816 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
I1227 20:47:03.349626 512816 kubeadm.go:319] [preflight] Running pre-flight checks
I1227 20:47:03.462446 512816 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1227 20:47:03.462536 512816 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1227 20:47:03.462580 512816 kubeadm.go:319] OS: Linux
I1227 20:47:03.462630 512816 kubeadm.go:319] CGROUPS_CPU: enabled
I1227 20:47:03.462683 512816 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1227 20:47:03.462733 512816 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1227 20:47:03.462790 512816 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1227 20:47:03.462843 512816 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1227 20:47:03.462895 512816 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1227 20:47:03.462945 512816 kubeadm.go:319] CGROUPS_PIDS: enabled
I1227 20:47:03.462998 512816 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1227 20:47:03.463048 512816 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1227 20:47:03.551609 512816 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1227 20:47:03.551726 512816 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1227 20:47:03.551823 512816 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1227 20:47:03.563738 512816 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1227 20:47:03.572951 512816 out.go:252] - Generating certificates and keys ...
I1227 20:47:03.573047 512816 kubeadm.go:319] [certs] Using existing ca certificate authority
I1227 20:47:03.573121 512816 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1227 20:47:03.573205 512816 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1227 20:47:03.573270 512816 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
I1227 20:47:03.573344 512816 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
I1227 20:47:03.573402 512816 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
I1227 20:47:03.573469 512816 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
I1227 20:47:03.573534 512816 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
I1227 20:47:03.573612 512816 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1227 20:47:03.573689 512816 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1227 20:47:03.573730 512816 kubeadm.go:319] [certs] Using the existing "sa" key
I1227 20:47:03.573790 512816 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1227 20:47:03.910169 512816 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1227 20:47:04.085625 512816 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1227 20:47:04.204209 512816 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1227 20:47:04.500994 512816 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1227 20:47:04.759578 512816 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1227 20:47:04.766015 512816 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1227 20:47:04.773293 512816 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1227 20:47:04.777627 512816 out.go:252] - Booting up control plane ...
I1227 20:47:04.777736 512816 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1227 20:47:04.777814 512816 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1227 20:47:04.777883 512816 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1227 20:47:04.799331 512816 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1227 20:47:04.799441 512816 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1227 20:47:04.809862 512816 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1227 20:47:04.809965 512816 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1227 20:47:04.810006 512816 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1227 20:47:04.984729 512816 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1227 20:47:04.984850 512816 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1227 20:51:04.984612 512816 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000227461s
I1227 20:51:04.984646 512816 kubeadm.go:319]
I1227 20:51:04.984705 512816 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1227 20:51:04.984745 512816 kubeadm.go:319] - The kubelet is not running
I1227 20:51:04.984855 512816 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1227 20:51:04.984862 512816 kubeadm.go:319]
I1227 20:51:04.984968 512816 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1227 20:51:04.985005 512816 kubeadm.go:319] - 'systemctl status kubelet'
I1227 20:51:04.985041 512816 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1227 20:51:04.985048 512816 kubeadm.go:319]
I1227 20:51:04.990635 512816 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1227 20:51:04.991126 512816 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1227 20:51:04.991276 512816 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1227 20:51:04.991544 512816 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1227 20:51:04.991557 512816 kubeadm.go:319]
I1227 20:51:04.991627 512816 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
I1227 20:51:04.991684 512816 kubeadm.go:403] duration metric: took 8m7.029842058s to StartCluster
I1227 20:51:04.991734 512816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1227 20:51:04.991795 512816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
I1227 20:51:05.027213 512816 cri.go:96] found id: ""
I1227 20:51:05.027263 512816 logs.go:282] 0 containers: []
W1227 20:51:05.027273 512816 logs.go:284] No container was found matching "kube-apiserver"
I1227 20:51:05.027283 512816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1227 20:51:05.027361 512816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
I1227 20:51:05.071930 512816 cri.go:96] found id: ""
I1227 20:51:05.071965 512816 logs.go:282] 0 containers: []
W1227 20:51:05.071975 512816 logs.go:284] No container was found matching "etcd"
I1227 20:51:05.071982 512816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1227 20:51:05.072053 512816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
I1227 20:51:05.107389 512816 cri.go:96] found id: ""
I1227 20:51:05.107457 512816 logs.go:282] 0 containers: []
W1227 20:51:05.107479 512816 logs.go:284] No container was found matching "coredns"
I1227 20:51:05.107501 512816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1227 20:51:05.107591 512816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
I1227 20:51:05.134987 512816 cri.go:96] found id: ""
I1227 20:51:05.135061 512816 logs.go:282] 0 containers: []
W1227 20:51:05.135085 512816 logs.go:284] No container was found matching "kube-scheduler"
I1227 20:51:05.135108 512816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1227 20:51:05.135234 512816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
I1227 20:51:05.165609 512816 cri.go:96] found id: ""
I1227 20:51:05.165637 512816 logs.go:282] 0 containers: []
W1227 20:51:05.165646 512816 logs.go:284] No container was found matching "kube-proxy"
I1227 20:51:05.165653 512816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1227 20:51:05.165737 512816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
I1227 20:51:05.194172 512816 cri.go:96] found id: ""
I1227 20:51:05.194198 512816 logs.go:282] 0 containers: []
W1227 20:51:05.194208 512816 logs.go:284] No container was found matching "kube-controller-manager"
I1227 20:51:05.194215 512816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1227 20:51:05.194319 512816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
I1227 20:51:05.222973 512816 cri.go:96] found id: ""
I1227 20:51:05.223048 512816 logs.go:282] 0 containers: []
W1227 20:51:05.223072 512816 logs.go:284] No container was found matching "kindnet"
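With the API server never having come up, each per-component query above returns an empty ID list. The seven listings are one crictl invocation with a varying --name filter, equivalently:

  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
    sudo crictl --timeout=10s ps -a --quiet --name="$c"   # prints container IDs; empty here - nothing was ever created
  done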
I1227 20:51:05.223100 512816 logs.go:123] Gathering logs for kubelet ...
I1227 20:51:05.223136 512816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1227 20:51:05.281082 512816 logs.go:123] Gathering logs for dmesg ...
I1227 20:51:05.281117 512816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1227 20:51:05.296692 512816 logs.go:123] Gathering logs for describe nodes ...
I1227 20:51:05.296723 512816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1227 20:51:05.369377 512816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1227 20:51:05.360528 4811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 20:51:05.361234 4811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 20:51:05.362904 4811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 20:51:05.363554 4811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 20:51:05.365159 4811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1227 20:51:05.369414 512816 logs.go:123] Gathering logs for containerd ...
I1227 20:51:05.369427 512816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1227 20:51:05.409653 512816 logs.go:123] Gathering logs for container status ...
I1227 20:51:05.409736 512816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W1227 20:51:05.438490 512816 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000227461s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
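The kubelet health endpoint at http://127.0.0.1:10248/healthz never answered inside the kicbase container, so the kubeadm advice above is the natural starting point. A minimal triage sketch, running those same commands through the test binary's ssh subcommand against this profile (whether the unit is dead or crash-looping is not established by this log):
out/minikube-linux-arm64 -p force-systemd-flag-875839 ssh "sudo systemctl status kubelet --no-pager"   # unit state and last exit code
out/minikube-linux-arm64 -p force-systemd-flag-875839 ssh "sudo journalctl -xeu kubelet -n 100"        # most recent kubelet log lines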
W1227 20:51:05.438603 512816 out.go:285] *
W1227 20:51:05.438888 512816 out.go:285] *
W1227 20:51:05.439283 512816 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1227 20:51:05.446314 512816 out.go:203]
W1227 20:51:05.449296 512816 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
W1227 20:51:05.449357 512816 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W1227 20:51:05.449379 512816 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
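The suggestion above amounts to retrying the failed start with the kubelet's cgroup driver pinned to systemd. A sketch of that retry, reusing the flags from this run (whether it succeeds on this cgroup-v1 host is not established here):
out/minikube-linux-arm64 start -p force-systemd-flag-875839 --memory=3072 --force-systemd --driver=docker --container-runtime=containerd --extra-config=kubelet.cgroup-driver=systemd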
I1227 20:51:05.452446 512816 out.go:203]
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-flag-875839 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=containerd" : exit status 109
docker_test.go:121: (dbg) Run: out/minikube-linux-arm64 -p force-systemd-flag-875839 ssh "cat /etc/containerd/config.toml"
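This check reads the containerd config to confirm that --force-systemd actually switched the runtime to the systemd cgroup driver. A narrower sketch of the same check, matching on the key rather than the full TOML table path (containerd 2.x nests the runc options under a different plugin table than 1.x):
out/minikube-linux-arm64 -p force-systemd-flag-875839 ssh "grep -n SystemdCgroup /etc/containerd/config.toml"   # the test expects: SystemdCgroup = true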
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2025-12-27 20:51:05.807551173 +0000 UTC m=+3343.326692542
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======> post-mortem[TestForceSystemdFlag]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======> post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:240: (dbg) Run: docker inspect force-systemd-flag-875839
helpers_test.go:244: (dbg) docker inspect force-systemd-flag-875839:
-- stdout --
[
{
"Id": "51a34498e61d9994e10467e8c2664238bcd3bf09c00d13871ab2321d19024479",
"Created": "2025-12-27T20:42:49.481957215Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 513250,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-12-27T20:42:49.541236963Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
"ResolvConfPath": "/var/lib/docker/containers/51a34498e61d9994e10467e8c2664238bcd3bf09c00d13871ab2321d19024479/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/51a34498e61d9994e10467e8c2664238bcd3bf09c00d13871ab2321d19024479/hostname",
"HostsPath": "/var/lib/docker/containers/51a34498e61d9994e10467e8c2664238bcd3bf09c00d13871ab2321d19024479/hosts",
"LogPath": "/var/lib/docker/containers/51a34498e61d9994e10467e8c2664238bcd3bf09c00d13871ab2321d19024479/51a34498e61d9994e10467e8c2664238bcd3bf09c00d13871ab2321d19024479-json.log",
"Name": "/force-systemd-flag-875839",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"force-systemd-flag-875839:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "force-systemd-flag-875839",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 3221225472,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 6442450944,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "51a34498e61d9994e10467e8c2664238bcd3bf09c00d13871ab2321d19024479",
"LowerDir": "/var/lib/docker/overlay2/318700413bb57e4b591bd4cf47a946692bce00ab93f013e0ac25b15591e84ff2-init/diff:/var/lib/docker/overlay2/3aa037d6df727552c898397d6b697d27a219037ea9700eb1f4b4eaf57c46a788/diff",
"MergedDir": "/var/lib/docker/overlay2/318700413bb57e4b591bd4cf47a946692bce00ab93f013e0ac25b15591e84ff2/merged",
"UpperDir": "/var/lib/docker/overlay2/318700413bb57e4b591bd4cf47a946692bce00ab93f013e0ac25b15591e84ff2/diff",
"WorkDir": "/var/lib/docker/overlay2/318700413bb57e4b591bd4cf47a946692bce00ab93f013e0ac25b15591e84ff2/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "force-systemd-flag-875839",
"Source": "/var/lib/docker/volumes/force-systemd-flag-875839/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "force-systemd-flag-875839",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "force-systemd-flag-875839",
"name.minikube.sigs.k8s.io": "force-systemd-flag-875839",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "7ec8b28bb9195d136ae7929fe8ef067500c7b4146ac5cfa62d00f1b9143618ff",
"SandboxKey": "/var/run/docker/netns/7ec8b28bb919",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33416"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33417"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33420"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33418"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33419"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"force-systemd-flag-875839": {
"IPAMConfig": {
"IPv4Address": "192.168.76.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "0a:1b:cb:db:66:18",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "c5ecbce927db28a7fe0fa1b2174604ca2b9dda404938126b7566e4272488dff0",
"EndpointID": "db8cb25113df553cef72dd34d92105a5feac3d21173f13b1d948f19796785c6e",
"Gateway": "192.168.76.1",
"IPAddress": "192.168.76.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"force-systemd-flag-875839",
"51a34498e61d"
]
}
}
}
}
]
-- /stdout --
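The inspect output above shows each control port published on 127.0.0.1 with an ephemeral host port. The same Go template minikube itself uses later in these logs can pull out a single mapping, e.g. the apiserver port (33419 in this run):
docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' force-systemd-flag-875839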
helpers_test.go:248: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-875839 -n force-systemd-flag-875839
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-875839 -n force-systemd-flag-875839: exit status 6 (345.581968ms)
-- stdout --
Running
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1227 20:51:06.163669 544004 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-875839" does not appear in /home/jenkins/minikube-integration/22332-300670/kubeconfig
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
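The status check returns exit status 6 because the profile never reached the kubeconfig: kubeadm never finished, so no endpoint was recorded. The warning's own remedy is shown below for completeness, though it can only help once an endpoint actually exists:
out/minikube-linux-arm64 -p force-systemd-flag-875839 update-context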
helpers_test.go:253: <<< TestForceSystemdFlag FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======> post-mortem[TestForceSystemdFlag]: minikube logs <======
helpers_test.go:256: (dbg) Run: out/minikube-linux-arm64 -p force-systemd-flag-875839 logs -n 25
helpers_test.go:261: TestForceSystemdFlag logs:
-- stdout --
==> Audit <==
┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ addons │ enable metrics-server -p old-k8s-version-551586 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ old-k8s-version-551586 │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │ 27 Dec 25 20:45 UTC │
│ stop │ -p old-k8s-version-551586 --alsologtostderr -v=3 │ old-k8s-version-551586 │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │ 27 Dec 25 20:45 UTC │
│ addons │ enable dashboard -p old-k8s-version-551586 --images=MetricsScraper=registry.k8s.io/echoserver:1.4 │ old-k8s-version-551586 │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │ 27 Dec 25 20:45 UTC │
│ start │ -p old-k8s-version-551586 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-551586 │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │ 27 Dec 25 20:46 UTC │
│ image │ old-k8s-version-551586 image list --format=json │ old-k8s-version-551586 │ jenkins │ v1.37.0 │ 27 Dec 25 20:46 UTC │ 27 Dec 25 20:46 UTC │
│ pause │ -p old-k8s-version-551586 --alsologtostderr -v=1 │ old-k8s-version-551586 │ jenkins │ v1.37.0 │ 27 Dec 25 20:46 UTC │ 27 Dec 25 20:46 UTC │
│ unpause │ -p old-k8s-version-551586 --alsologtostderr -v=1 │ old-k8s-version-551586 │ jenkins │ v1.37.0 │ 27 Dec 25 20:46 UTC │ 27 Dec 25 20:46 UTC │
│ delete │ -p old-k8s-version-551586 │ old-k8s-version-551586 │ jenkins │ v1.37.0 │ 27 Dec 25 20:46 UTC │ 27 Dec 25 20:46 UTC │
│ delete │ -p old-k8s-version-551586 │ old-k8s-version-551586 │ jenkins │ v1.37.0 │ 27 Dec 25 20:46 UTC │ 27 Dec 25 20:46 UTC │
│ start │ -p no-preload-259913 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0 │ no-preload-259913 │ jenkins │ v1.37.0 │ 27 Dec 25 20:46 UTC │ 27 Dec 25 20:47 UTC │
│ addons │ enable metrics-server -p no-preload-259913 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ no-preload-259913 │ jenkins │ v1.37.0 │ 27 Dec 25 20:47 UTC │ 27 Dec 25 20:47 UTC │
│ stop │ -p no-preload-259913 --alsologtostderr -v=3 │ no-preload-259913 │ jenkins │ v1.37.0 │ 27 Dec 25 20:47 UTC │ 27 Dec 25 20:48 UTC │
│ addons │ enable dashboard -p no-preload-259913 --images=MetricsScraper=registry.k8s.io/echoserver:1.4 │ no-preload-259913 │ jenkins │ v1.37.0 │ 27 Dec 25 20:48 UTC │ 27 Dec 25 20:48 UTC │
│ start │ -p no-preload-259913 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0 │ no-preload-259913 │ jenkins │ v1.37.0 │ 27 Dec 25 20:48 UTC │ 27 Dec 25 20:48 UTC │
│ image │ no-preload-259913 image list --format=json │ no-preload-259913 │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:49 UTC │
│ pause │ -p no-preload-259913 --alsologtostderr -v=1 │ no-preload-259913 │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:49 UTC │
│ unpause │ -p no-preload-259913 --alsologtostderr -v=1 │ no-preload-259913 │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:49 UTC │
│ delete │ -p no-preload-259913 │ no-preload-259913 │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:49 UTC │
│ delete │ -p no-preload-259913 │ no-preload-259913 │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:49 UTC │
│ start │ -p embed-certs-920276 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0 │ embed-certs-920276 │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:49 UTC │
│ addons │ enable metrics-server -p embed-certs-920276 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ embed-certs-920276 │ jenkins │ v1.37.0 │ 27 Dec 25 20:50 UTC │ 27 Dec 25 20:50 UTC │
│ stop │ -p embed-certs-920276 --alsologtostderr -v=3 │ embed-certs-920276 │ jenkins │ v1.37.0 │ 27 Dec 25 20:50 UTC │ 27 Dec 25 20:50 UTC │
│ addons │ enable dashboard -p embed-certs-920276 --images=MetricsScraper=registry.k8s.io/echoserver:1.4 │ embed-certs-920276 │ jenkins │ v1.37.0 │ 27 Dec 25 20:50 UTC │ 27 Dec 25 20:50 UTC │
│ start │ -p embed-certs-920276 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0 │ embed-certs-920276 │ jenkins │ v1.37.0 │ 27 Dec 25 20:50 UTC │ │
│ ssh │ force-systemd-flag-875839 ssh cat /etc/containerd/config.toml │ force-systemd-flag-875839 │ jenkins │ v1.37.0 │ 27 Dec 25 20:51 UTC │ 27 Dec 25 20:51 UTC │
└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/27 20:50:16
Running on machine: ip-172-31-31-251
Binary: Built with gc go1.25.5 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1227 20:50:16.009614 540878 out.go:360] Setting OutFile to fd 1 ...
I1227 20:50:16.009824 540878 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:50:16.009873 540878 out.go:374] Setting ErrFile to fd 2...
I1227 20:50:16.009898 540878 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:50:16.010215 540878 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-300670/.minikube/bin
I1227 20:50:16.010710 540878 out.go:368] Setting JSON to false
I1227 20:50:16.011723 540878 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9167,"bootTime":1766859449,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
I1227 20:50:16.011832 540878 start.go:143] virtualization:
I1227 20:50:16.015258 540878 out.go:179] * [embed-certs-920276] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I1227 20:50:16.019266 540878 out.go:179] - MINIKUBE_LOCATION=22332
I1227 20:50:16.019350 540878 notify.go:221] Checking for updates...
I1227 20:50:16.025175 540878 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1227 20:50:16.028336 540878 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22332-300670/kubeconfig
I1227 20:50:16.031417 540878 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-300670/.minikube
I1227 20:50:16.034546 540878 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I1227 20:50:16.037603 540878 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1227 20:50:16.041121 540878 config.go:182] Loaded profile config "embed-certs-920276": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1227 20:50:16.041768 540878 driver.go:422] Setting default libvirt URI to qemu:///system
I1227 20:50:16.072984 540878 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
I1227 20:50:16.073117 540878 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1227 20:50:16.134428 540878 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 20:50:16.124444602 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1227 20:50:16.134543 540878 docker.go:319] overlay module found
I1227 20:50:16.137702 540878 out.go:179] * Using the docker driver based on existing profile
I1227 20:50:16.140505 540878 start.go:309] selected driver: docker
I1227 20:50:16.140530 540878 start.go:928] validating driver "docker" against &{Name:embed-certs-920276 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-920276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1227 20:50:16.140652 540878 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1227 20:50:16.141429 540878 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1227 20:50:16.204255 540878 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 20:50:16.19533537 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1227 20:50:16.204604 540878 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1227 20:50:16.204633 540878 cni.go:84] Creating CNI manager for ""
I1227 20:50:16.204694 540878 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1227 20:50:16.204733 540878 start.go:353] cluster config:
{Name:embed-certs-920276 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-920276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1227 20:50:16.208096 540878 out.go:179] * Starting "embed-certs-920276" primary control-plane node in "embed-certs-920276" cluster
I1227 20:50:16.210965 540878 cache.go:134] Beginning downloading kic base image for docker with containerd
I1227 20:50:16.213876 540878 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
I1227 20:50:16.216711 540878 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I1227 20:50:16.216761 540878 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-300670/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
I1227 20:50:16.216779 540878 cache.go:65] Caching tarball of preloaded images
I1227 20:50:16.216782 540878 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
I1227 20:50:16.216868 540878 preload.go:251] Found /home/jenkins/minikube-integration/22332-300670/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I1227 20:50:16.216879 540878 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
I1227 20:50:16.217000 540878 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/embed-certs-920276/config.json ...
I1227 20:50:16.236850 540878 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
I1227 20:50:16.236873 540878 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
I1227 20:50:16.236890 540878 cache.go:243] Successfully downloaded all kic artifacts
I1227 20:50:16.236922 540878 start.go:360] acquireMachinesLock for embed-certs-920276: {Name:mk59d29820c96aa85d20d8a3a5e4016f0bf5a9a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1227 20:50:16.236984 540878 start.go:364] duration metric: took 38.564µs to acquireMachinesLock for "embed-certs-920276"
I1227 20:50:16.237007 540878 start.go:96] Skipping create...Using existing machine configuration
I1227 20:50:16.237013 540878 fix.go:54] fixHost starting:
I1227 20:50:16.237287 540878 cli_runner.go:164] Run: docker container inspect embed-certs-920276 --format={{.State.Status}}
I1227 20:50:16.254350 540878 fix.go:112] recreateIfNeeded on embed-certs-920276: state=Stopped err=<nil>
W1227 20:50:16.254383 540878 fix.go:138] unexpected machine state, will restart: <nil>
I1227 20:50:16.257664 540878 out.go:252] * Restarting existing docker container for "embed-certs-920276" ...
I1227 20:50:16.257766 540878 cli_runner.go:164] Run: docker start embed-certs-920276
I1227 20:50:16.519076 540878 cli_runner.go:164] Run: docker container inspect embed-certs-920276 --format={{.State.Status}}
I1227 20:50:16.538059 540878 kic.go:430] container "embed-certs-920276" state is running.
I1227 20:50:16.538448 540878 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-920276
I1227 20:50:16.563853 540878 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/embed-certs-920276/config.json ...
I1227 20:50:16.564098 540878 machine.go:94] provisionDockerMachine start ...
I1227 20:50:16.564169 540878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-920276
I1227 20:50:16.583540 540878 main.go:144] libmachine: Using SSH client type: native
I1227 20:50:16.583868 540878 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33451 <nil> <nil>}
I1227 20:50:16.583878 540878 main.go:144] libmachine: About to run SSH command:
hostname
I1227 20:50:16.584757 540878 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59522->127.0.0.1:33451: read: connection reset by peer
I1227 20:50:19.722845 540878 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-920276
I1227 20:50:19.722874 540878 ubuntu.go:182] provisioning hostname "embed-certs-920276"
I1227 20:50:19.722958 540878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-920276
I1227 20:50:19.740925 540878 main.go:144] libmachine: Using SSH client type: native
I1227 20:50:19.741253 540878 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33451 <nil> <nil>}
I1227 20:50:19.741271 540878 main.go:144] libmachine: About to run SSH command:
sudo hostname embed-certs-920276 && echo "embed-certs-920276" | sudo tee /etc/hostname
I1227 20:50:19.892459 540878 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-920276
I1227 20:50:19.892548 540878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-920276
I1227 20:50:19.910533 540878 main.go:144] libmachine: Using SSH client type: native
I1227 20:50:19.910860 540878 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33451 <nil> <nil>}
I1227 20:50:19.910876 540878 main.go:144] libmachine: About to run SSH command:
if ! grep -xq '.*\sembed-certs-920276' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-920276/g' /etc/hosts;
  else
    echo '127.0.1.1 embed-certs-920276' | sudo tee -a /etc/hosts;
  fi
fi
I1227 20:50:20.054893 540878 main.go:144] libmachine: SSH cmd err, output: <nil>:
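The script above is an idempotent hosts fix: it touches /etc/hosts only when no entry for the hostname exists, rewriting the 127.0.1.1 line in place when there is one and appending otherwise. A standalone sketch of the same pattern, with the hostname pulled into a variable for illustration (HOST is not from the log):

    # ensure /etc/hosts maps 127.0.1.1 to the node hostname, without duplicates
    HOST=embed-certs-920276
    if ! grep -q "[[:space:]]${HOST}\$" /etc/hosts; then
      if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
        # a 127.0.1.1 line already exists: rewrite it in place
        sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${HOST}/" /etc/hosts
      else
        # no 127.0.1.1 line yet: append one
        echo "127.0.1.1 ${HOST}" | sudo tee -a /etc/hosts >/dev/null
      fi
    fi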
I1227 20:50:20.054918 540878 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-300670/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-300670/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-300670/.minikube}
I1227 20:50:20.054958 540878 ubuntu.go:190] setting up certificates
I1227 20:50:20.054967 540878 provision.go:84] configureAuth start
I1227 20:50:20.055026 540878 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-920276
I1227 20:50:20.075740 540878 provision.go:143] copyHostCerts
I1227 20:50:20.075817 540878 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-300670/.minikube/ca.pem, removing ...
I1227 20:50:20.075833 540878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-300670/.minikube/ca.pem
I1227 20:50:20.075919 540878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-300670/.minikube/ca.pem (1082 bytes)
I1227 20:50:20.076024 540878 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-300670/.minikube/cert.pem, removing ...
I1227 20:50:20.076029 540878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-300670/.minikube/cert.pem
I1227 20:50:20.076056 540878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-300670/.minikube/cert.pem (1123 bytes)
I1227 20:50:20.076112 540878 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-300670/.minikube/key.pem, removing ...
I1227 20:50:20.076117 540878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-300670/.minikube/key.pem
I1227 20:50:20.076140 540878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-300670/.minikube/key.pem (1679 bytes)
I1227 20:50:20.076187 540878 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-300670/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca-key.pem org=jenkins.embed-certs-920276 san=[127.0.0.1 192.168.85.2 embed-certs-920276 localhost minikube]
I1227 20:50:20.841497 540878 provision.go:177] copyRemoteCerts
I1227 20:50:20.841575 540878 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1227 20:50:20.841619 540878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-920276
I1227 20:50:20.858882 540878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/embed-certs-920276/id_rsa Username:docker}
I1227 20:50:20.959134 540878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1227 20:50:20.977501 540878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1227 20:50:20.997002 540878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
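The server.pem copied above was generated at provision.go:117 with the SAN list [127.0.0.1 192.168.85.2 embed-certs-920276 localhost minikube]. Assuming openssl is available inside the node, one way to confirm those SANs actually made it into the cert:

    # print the Subject Alternative Name block of the provisioned server cert
    openssl x509 -noout -text -in /etc/docker/server.pem \
      | grep -A1 'Subject Alternative Name'
    # should list DNS:embed-certs-920276, DNS:localhost, DNS:minikube,
    # IP Address:127.0.0.1 and IP Address:192.168.85.2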
I1227 20:50:21.016593 540878 provision.go:87] duration metric: took 961.611509ms to configureAuth
I1227 20:50:21.016620 540878 ubuntu.go:206] setting minikube options for container-runtime
I1227 20:50:21.016821 540878 config.go:182] Loaded profile config "embed-certs-920276": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1227 20:50:21.016829 540878 machine.go:97] duration metric: took 4.452715376s to provisionDockerMachine
I1227 20:50:21.016836 540878 start.go:293] postStartSetup for "embed-certs-920276" (driver="docker")
I1227 20:50:21.016846 540878 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1227 20:50:21.016906 540878 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1227 20:50:21.016948 540878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-920276
I1227 20:50:21.034110 540878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/embed-certs-920276/id_rsa Username:docker}
I1227 20:50:21.131561 540878 ssh_runner.go:195] Run: cat /etc/os-release
I1227 20:50:21.135122 540878 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1227 20:50:21.135149 540878 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1227 20:50:21.135161 540878 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-300670/.minikube/addons for local assets ...
I1227 20:50:21.135244 540878 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-300670/.minikube/files for local assets ...
I1227 20:50:21.135319 540878 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-300670/.minikube/files/etc/ssl/certs/3025412.pem -> 3025412.pem in /etc/ssl/certs
I1227 20:50:21.135419 540878 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1227 20:50:21.143536 540878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/files/etc/ssl/certs/3025412.pem --> /etc/ssl/certs/3025412.pem (1708 bytes)
I1227 20:50:21.162629 540878 start.go:296] duration metric: took 145.776326ms for postStartSetup
I1227 20:50:21.162767 540878 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1227 20:50:21.162811 540878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-920276
I1227 20:50:21.180184 540878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/embed-certs-920276/id_rsa Username:docker}
I1227 20:50:21.280723 540878 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1227 20:50:21.285776 540878 fix.go:56] duration metric: took 5.048755717s for fixHost
I1227 20:50:21.285803 540878 start.go:83] releasing machines lock for "embed-certs-920276", held for 5.048807016s
I1227 20:50:21.285878 540878 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-920276
I1227 20:50:21.302621 540878 ssh_runner.go:195] Run: cat /version.json
I1227 20:50:21.302676 540878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-920276
I1227 20:50:21.302946 540878 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1227 20:50:21.303011 540878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-920276
I1227 20:50:21.324721 540878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/embed-certs-920276/id_rsa Username:docker}
I1227 20:50:21.333253 540878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/embed-certs-920276/id_rsa Username:docker}
I1227 20:50:21.541785 540878 ssh_runner.go:195] Run: systemctl --version
I1227 20:50:21.550188 540878 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1227 20:50:21.555235 540878 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1227 20:50:21.555307 540878 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1227 20:50:21.564159 540878 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I1227 20:50:21.564187 540878 start.go:496] detecting cgroup driver to use...
I1227 20:50:21.564219 540878 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1227 20:50:21.564270 540878 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1227 20:50:21.582495 540878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1227 20:50:21.598352 540878 docker.go:218] disabling cri-docker service (if available) ...
I1227 20:50:21.598440 540878 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1227 20:50:21.614194 540878 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1227 20:50:21.628000 540878 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1227 20:50:21.737467 540878 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1227 20:50:21.856279 540878 docker.go:234] disabling docker service ...
I1227 20:50:21.856433 540878 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1227 20:50:21.872148 540878 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1227 20:50:21.885634 540878 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1227 20:50:22.009069 540878 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1227 20:50:22.132051 540878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1227 20:50:22.145487 540878 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1227 20:50:22.159934 540878 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1227 20:50:22.169330 540878 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1227 20:50:22.179268 540878 containerd.go:147] configuring containerd to use "cgroupfs" as cgroup driver...
I1227 20:50:22.179388 540878 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1227 20:50:22.188886 540878 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1227 20:50:22.197995 540878 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1227 20:50:22.206973 540878 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1227 20:50:22.216466 540878 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1227 20:50:22.224747 540878 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1227 20:50:22.233914 540878 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1227 20:50:22.243547 540878 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1227 20:50:22.253157 540878 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1227 20:50:22.261154 540878 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1227 20:50:22.269079 540878 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 20:50:22.417152 540878 ssh_runner.go:195] Run: sudo systemctl restart containerd
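The run of sed edits above rewrites /etc/containerd/config.toml so containerd matches the "cgroupfs" driver detected on the host, normalizes the runc shim and CNI settings, and then restarts the service. Condensed to the edit that matters for the cgroup driver (same file and key as logged; everything else is left to the existing config):

    # match containerd's cgroup driver to the host's cgroupfs driver
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    sudo systemctl daemon-reload
    sudo systemctl restart containerd
    # the socket must reappear within the 60s window minikube waits below
    stat /run/containerd/containerd.sock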
I1227 20:50:22.573171 540878 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
I1227 20:50:22.573296 540878 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1227 20:50:22.577551 540878 start.go:574] Will wait 60s for crictl version
I1227 20:50:22.577629 540878 ssh_runner.go:195] Run: which crictl
I1227 20:50:22.581782 540878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1227 20:50:22.607740 540878 start.go:590] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v2.2.1
RuntimeApiVersion: v1
I1227 20:50:22.607819 540878 ssh_runner.go:195] Run: containerd --version
I1227 20:50:22.631731 540878 ssh_runner.go:195] Run: containerd --version
I1227 20:50:22.657731 540878 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
I1227 20:50:22.660751 540878 cli_runner.go:164] Run: docker network inspect embed-certs-920276 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1227 20:50:22.677239 540878 ssh_runner.go:195] Run: grep 192.168.85.1 host.minikube.internal$ /etc/hosts
I1227 20:50:22.681297 540878 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
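Note the different technique here: rather than sed-editing in place, the host.minikube.internal entry is maintained by filtering any stale line out and rewriting /etc/hosts wholesale, so repeated restarts never accumulate duplicates. The same pattern by hand, with the gateway IP taken from the network inspect above:

    # drop any old host.minikube.internal line, then append the current one
    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
      echo "192.168.85.1 host.minikube.internal"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts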
I1227 20:50:22.691421 540878 kubeadm.go:884] updating cluster {Name:embed-certs-920276 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-920276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
I1227 20:50:22.691541 540878 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I1227 20:50:22.691616 540878 ssh_runner.go:195] Run: sudo crictl images --output json
I1227 20:50:22.721764 540878 containerd.go:635] all images are preloaded for containerd runtime.
I1227 20:50:22.721792 540878 containerd.go:542] Images already preloaded, skipping extraction
I1227 20:50:22.721868 540878 ssh_runner.go:195] Run: sudo crictl images --output json
I1227 20:50:22.747556 540878 containerd.go:635] all images are preloaded for containerd runtime.
I1227 20:50:22.747583 540878 cache_images.go:86] Images are preloaded, skipping loading
I1227 20:50:22.747591 540878 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 containerd true true} ...
I1227 20:50:22.747701 540878 kubeadm.go:947] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-920276 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
[Install]
config:
{KubernetesVersion:v1.35.0 ClusterName:embed-certs-920276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1227 20:50:22.747783 540878 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
I1227 20:50:22.773222 540878 cni.go:84] Creating CNI manager for ""
I1227 20:50:22.773248 540878 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1227 20:50:22.773306 540878 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1227 20:50:22.773339 540878 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-920276 NodeName:embed-certs-920276 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1227 20:50:22.773474 540878 kubeadm.go:203] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.85.2
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "embed-certs-920276"
  kubeletExtraArgs:
  - name: "node-ip"
    value: "192.168.85.2"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
  extraArgs:
  - name: "enable-admission-plugins"
    value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
  - name: "allocate-node-cidrs"
    value: "true"
  - name: "leader-elect"
    value: "false"
scheduler:
  extraArgs:
  - name: "leader-elect"
    value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.35.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
failCgroupV1: false
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
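This rendered config is not applied blindly: it is staged as /var/tmp/minikube/kubeadm.yaml.new (see the scp below) and later diffed against the live copy, so an unchanged cluster can skip kubeadm reconfiguration entirely. The check reduces to:

    # exit status 0 means the staged config matches the running one
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
      && echo 'config unchanged: no reconfiguration needed'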
I1227 20:50:22.773549 540878 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
I1227 20:50:22.781463 540878 binaries.go:51] Found k8s binaries, skipping transfer
I1227 20:50:22.781534 540878 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1227 20:50:22.789363 540878 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
I1227 20:50:22.802095 540878 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1227 20:50:22.814688 540878 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2251 bytes)
I1227 20:50:22.827306 540878 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I1227 20:50:22.831055 540878 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1227 20:50:22.841449 540878 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 20:50:22.952233 540878 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1227 20:50:22.969630 540878 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/embed-certs-920276 for IP: 192.168.85.2
I1227 20:50:22.969653 540878 certs.go:195] generating shared ca certs ...
I1227 20:50:22.969668 540878 certs.go:227] acquiring lock for ca certs: {Name:mkf93c4b7b6f0a265527090e39bdf731f6a1491b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 20:50:22.969841 540878 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-300670/.minikube/ca.key
I1227 20:50:22.969895 540878 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-300670/.minikube/proxy-client-ca.key
I1227 20:50:22.969908 540878 certs.go:257] generating profile certs ...
I1227 20:50:22.969996 540878 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/embed-certs-920276/client.key
I1227 20:50:22.970070 540878 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/embed-certs-920276/apiserver.key.fca527cf
I1227 20:50:22.970115 540878 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/embed-certs-920276/proxy-client.key
I1227 20:50:22.970226 540878 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/302541.pem (1338 bytes)
W1227 20:50:22.970263 540878 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-300670/.minikube/certs/302541_empty.pem, impossibly tiny 0 bytes
I1227 20:50:22.970275 540878 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca-key.pem (1679 bytes)
I1227 20:50:22.970301 540878 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca.pem (1082 bytes)
I1227 20:50:22.970329 540878 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/cert.pem (1123 bytes)
I1227 20:50:22.970357 540878 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/key.pem (1679 bytes)
I1227 20:50:22.970412 540878 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-300670/.minikube/files/etc/ssl/certs/3025412.pem (1708 bytes)
I1227 20:50:22.971022 540878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1227 20:50:22.992907 540878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1227 20:50:23.012614 540878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1227 20:50:23.039920 540878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1227 20:50:23.085951 540878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/embed-certs-920276/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
I1227 20:50:23.108228 540878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/embed-certs-920276/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1227 20:50:23.135883 540878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/embed-certs-920276/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1227 20:50:23.165679 540878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/embed-certs-920276/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I1227 20:50:23.188520 540878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/certs/302541.pem --> /usr/share/ca-certificates/302541.pem (1338 bytes)
I1227 20:50:23.215897 540878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/files/etc/ssl/certs/3025412.pem --> /usr/share/ca-certificates/3025412.pem (1708 bytes)
I1227 20:50:23.234234 540878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1227 20:50:23.272269 540878 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
I1227 20:50:23.285745 540878 ssh_runner.go:195] Run: openssl version
I1227 20:50:23.294229 540878 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3025412.pem
I1227 20:50:23.304072 540878 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3025412.pem /etc/ssl/certs/3025412.pem
I1227 20:50:23.313157 540878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3025412.pem
I1227 20:50:23.317340 540878 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:01 /usr/share/ca-certificates/3025412.pem
I1227 20:50:23.317423 540878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3025412.pem
I1227 20:50:23.359282 540878 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I1227 20:50:23.367427 540878 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1227 20:50:23.375513 540878 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1227 20:50:23.383433 540878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1227 20:50:23.387312 540878 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
I1227 20:50:23.387383 540878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1227 20:50:23.428769 540878 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1227 20:50:23.436784 540878 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/302541.pem
I1227 20:50:23.444881 540878 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/302541.pem /etc/ssl/certs/302541.pem
I1227 20:50:23.453132 540878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/302541.pem
I1227 20:50:23.457190 540878 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:01 /usr/share/ca-certificates/302541.pem
I1227 20:50:23.457275 540878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/302541.pem
I1227 20:50:23.499008 540878 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
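The three test/ln/hash cycles above install each CA the way OpenSSL expects to find trust anchors: a symlink in /etc/ssl/certs plus a link named after the certificate's subject hash (the b5213941 check above corresponds to minikubeCA.pem). For one cert, a sketch of the equivalent by hand; the final hash-named link is inferred, since the log only shows the test for it:

    # link the CA into the trust dir under both its name and its subject hash
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # e.g. b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"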
I1227 20:50:23.506926 540878 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1227 20:50:23.510865 540878 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I1227 20:50:23.552378 540878 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I1227 20:50:23.600920 540878 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I1227 20:50:23.645107 540878 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I1227 20:50:23.687352 540878 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I1227 20:50:23.739989 540878 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
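The six openssl runs above are expiry checks, not chain validations: -checkend 86400 exits nonzero if the certificate expires within the next 86400 seconds (one day), which is what would trigger regeneration on restart. For a single cert:

    # succeeds only if the cert is still valid 24 hours from now
    if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
      echo 'cert good for at least another day'
    else
      echo 'cert expires within 24h; it would be regenerated'
    fi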
I1227 20:50:23.829044 540878 kubeadm.go:401] StartCluster: {Name:embed-certs-920276 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-920276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1227 20:50:23.829200 540878 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1227 20:50:23.829312 540878 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1227 20:50:23.883973 540878 cri.go:96] found id: "2f173cac0f9685e08a95dd6f68a4ef15dc9852da4472a8d02f39aa3cc8109a83"
I1227 20:50:23.884058 540878 cri.go:96] found id: "97df2f4ff1d74c17de7edf548cb377ce6d8127c7abcc5115f1c261fbf453f2b7"
I1227 20:50:23.884079 540878 cri.go:96] found id: "96c0eef1af88cbdf7bad08bd45a3a95a244df655c59b89680d0574df7851aa36"
I1227 20:50:23.884099 540878 cri.go:96] found id: "f7b535b8d6864bb0b7f11f80357ef9d6b37ddd5b7bec49646da3f88fc8651894"
I1227 20:50:23.884130 540878 cri.go:96] found id: "bf5f50631af9fc470adedd8fe5c7c8eb2b4721b6f2e133c19cdb0545fa44131f"
I1227 20:50:23.884153 540878 cri.go:96] found id: "462eb09c51ab0f37fe7780e5ee4429fb5d2162825bcfc0c17411f23245ee849d"
I1227 20:50:23.884259 540878 cri.go:96] found id: "caf47dec1770cd41b912d94c45f19879a2fa0df92e005492498bc5827a53bebf"
I1227 20:50:23.884279 540878 cri.go:96] found id: "b4bf5714be87c6bce43295497a8d95936539734d668f791ac9545a602dc8f481"
I1227 20:50:23.884297 540878 cri.go:96] found id: ""
I1227 20:50:23.884386 540878 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
I1227 20:50:23.924357 540878 cri.go:123] JSON = [{"ociVersion":"1.2.1","id":"56d9073ab735501c02dee5f7f12db20f121d07114d7ab986e67d986a250ae7a5","pid":904,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/56d9073ab735501c02dee5f7f12db20f121d07114d7ab986e67d986a250ae7a5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/56d9073ab735501c02dee5f7f12db20f121d07114d7ab986e67d986a250ae7a5/rootfs","created":"2025-12-27T20:50:23.81196477Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"56d9073ab735501c02dee5f7f12db20f121d07114d7ab986e67d986a250ae7a5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-embed-certs-920276_3df5a8a212961647d3066b74c35eb3ab","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-embed-certs-920276","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"3df5a8a212961647d3066b74c35eb3ab"},"owner":"root"},{"ociVersion":"1.2.1","id":"73039e4b619374e430b32bb41ebc24043874e560c531466b5fe24644fd13e8ed","pid":957,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/73039e4b619374e430b32bb41ebc24043874e560c531466b5fe24644fd13e8ed","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/73039e4b619374e430b32bb41ebc24043874e560c531466b5fe24644fd13e8ed/rootfs","created":"2025-12-27T20:50:23.903685959Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"73039e4b619374e430b32bb41ebc24043874e560c531466b5fe24644fd13e8ed","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-embed-certs-920276_7bb6d5e38042126465933f10ab5bbf65","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-embed-certs-920276","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"7bb6d5e38042126465933f10ab5bbf65"},"owner":"root"},{"ociVersion":"1.2.1","id":"c703b9a7fb1fcc5bb2dfdcc2fe4b8ab02db66ffa9cb457a724ab0fa27399156a","pid":918,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c703b9a7fb1fcc5bb2dfdcc2fe4b8ab02db66ffa9cb457a724ab0fa27399156a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c703b9a7fb1fcc5bb2dfdcc2fe4b8ab02db66ffa9cb457a724ab0fa27399156a/rootfs","created":"2025-12-27T20:50:23.808543271Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"c703b9a7fb1fcc5bb2dfdcc2fe4b8ab02db66ffa9cb457a724ab0fa27399156a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-embed-certs-920276_19ec925d2741946aa51ff0f936fea0eb","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-embed-certs-920276","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"19ec925d2741946aa51ff0f936fea0eb"},"owner":"root"}]
I1227 20:50:23.924553 540878 cri.go:133] list returned 3 containers
I1227 20:50:23.924594 540878 cri.go:136] container: {ID:56d9073ab735501c02dee5f7f12db20f121d07114d7ab986e67d986a250ae7a5 Status:created}
I1227 20:50:23.924629 540878 cri.go:138] skipping 56d9073ab735501c02dee5f7f12db20f121d07114d7ab986e67d986a250ae7a5 - not in ps
I1227 20:50:23.924665 540878 cri.go:136] container: {ID:73039e4b619374e430b32bb41ebc24043874e560c531466b5fe24644fd13e8ed Status:created}
I1227 20:50:23.924690 540878 cri.go:138] skipping 73039e4b619374e430b32bb41ebc24043874e560c531466b5fe24644fd13e8ed - not in ps
I1227 20:50:23.924710 540878 cri.go:136] container: {ID:c703b9a7fb1fcc5bb2dfdcc2fe4b8ab02db66ffa9cb457a724ab0fa27399156a Status:running}
I1227 20:50:23.924744 540878 cri.go:138] skipping c703b9a7fb1fcc5bb2dfdcc2fe4b8ab02db66ffa9cb457a724ab0fa27399156a - not in ps
I1227 20:50:23.924842 540878 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1227 20:50:23.937410 540878 kubeadm.go:417] found existing configuration files, will attempt cluster restart
I1227 20:50:23.937485 540878 kubeadm.go:598] restartPrimaryControlPlane start ...
I1227 20:50:23.937587 540878 ssh_runner.go:195] Run: sudo test -d /data/minikube
I1227 20:50:23.946616 540878 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I1227 20:50:23.947144 540878 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-920276" does not appear in /home/jenkins/minikube-integration/22332-300670/kubeconfig
I1227 20:50:23.947400 540878 kubeconfig.go:62] /home/jenkins/minikube-integration/22332-300670/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-920276" cluster setting kubeconfig missing "embed-certs-920276" context setting]
I1227 20:50:23.947756 540878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-300670/kubeconfig: {Name:mke76863c55a53bb5beeec750cba490366e88e90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 20:50:23.949419 540878 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I1227 20:50:23.968578 540878 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
I1227 20:50:23.968663 540878 kubeadm.go:602] duration metric: took 31.15895ms to restartPrimaryControlPlane
I1227 20:50:23.968748 540878 kubeadm.go:403] duration metric: took 139.715352ms to StartCluster
I1227 20:50:23.968782 540878 settings.go:142] acquiring lock: {Name:mk48481ad33e4d60aedaf03b00ac874fd5c339d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 20:50:23.968871 540878 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/22332-300670/kubeconfig
I1227 20:50:23.970006 540878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-300670/kubeconfig: {Name:mke76863c55a53bb5beeec750cba490366e88e90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1227 20:50:23.970341 540878 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1227 20:50:23.970844 540878 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I1227 20:50:23.970928 540878 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-920276"
I1227 20:50:23.970942 540878 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-920276"
W1227 20:50:23.970948 540878 addons.go:248] addon storage-provisioner should already be in state true
I1227 20:50:23.970972 540878 host.go:66] Checking if "embed-certs-920276" exists ...
I1227 20:50:23.971681 540878 cli_runner.go:164] Run: docker container inspect embed-certs-920276 --format={{.State.Status}}
I1227 20:50:23.972006 540878 config.go:182] Loaded profile config "embed-certs-920276": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1227 20:50:23.972096 540878 addons.go:70] Setting default-storageclass=true in profile "embed-certs-920276"
I1227 20:50:23.972154 540878 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-920276"
I1227 20:50:23.972490 540878 cli_runner.go:164] Run: docker container inspect embed-certs-920276 --format={{.State.Status}}
I1227 20:50:23.978682 540878 addons.go:70] Setting dashboard=true in profile "embed-certs-920276"
I1227 20:50:23.978774 540878 addons.go:239] Setting addon dashboard=true in "embed-certs-920276"
W1227 20:50:23.978798 540878 addons.go:248] addon dashboard should already be in state true
I1227 20:50:23.978865 540878 host.go:66] Checking if "embed-certs-920276" exists ...
I1227 20:50:23.979499 540878 cli_runner.go:164] Run: docker container inspect embed-certs-920276 --format={{.State.Status}}
I1227 20:50:23.979691 540878 addons.go:70] Setting metrics-server=true in profile "embed-certs-920276"
I1227 20:50:23.979728 540878 addons.go:239] Setting addon metrics-server=true in "embed-certs-920276"
W1227 20:50:23.979752 540878 addons.go:248] addon metrics-server should already be in state true
I1227 20:50:23.979808 540878 host.go:66] Checking if "embed-certs-920276" exists ...
I1227 20:50:23.980264 540878 cli_runner.go:164] Run: docker container inspect embed-certs-920276 --format={{.State.Status}}
I1227 20:50:23.994054 540878 out.go:179] * Verifying Kubernetes components...
I1227 20:50:23.997367 540878 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1227 20:50:24.012702 540878 addons.go:239] Setting addon default-storageclass=true in "embed-certs-920276"
W1227 20:50:24.012738 540878 addons.go:248] addon default-storageclass should already be in state true
I1227 20:50:24.012763 540878 host.go:66] Checking if "embed-certs-920276" exists ...
I1227 20:50:24.013215 540878 cli_runner.go:164] Run: docker container inspect embed-certs-920276 --format={{.State.Status}}
I1227 20:50:24.049152 540878 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1227 20:50:24.052125 540878 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1227 20:50:24.052149 540878 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1227 20:50:24.052226 540878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-920276
I1227 20:50:24.067360 540878 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
I1227 20:50:24.067387 540878 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1227 20:50:24.067450 540878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-920276
I1227 20:50:24.081498 540878 out.go:179] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I1227 20:50:24.081638 540878 out.go:179] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1227 20:50:24.084448 540878 out.go:179] - Using image registry.k8s.io/echoserver:1.4
I1227 20:50:24.084520 540878 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I1227 20:50:24.084532 540878 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I1227 20:50:24.084594 540878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-920276
I1227 20:50:24.090221 540878 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1227 20:50:24.090271 540878 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1227 20:50:24.090392 540878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-920276
I1227 20:50:24.118091 540878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/embed-certs-920276/id_rsa Username:docker}
I1227 20:50:24.146640 540878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/embed-certs-920276/id_rsa Username:docker}
I1227 20:50:24.153146 540878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/embed-certs-920276/id_rsa Username:docker}
I1227 20:50:24.164331 540878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/embed-certs-920276/id_rsa Username:docker}
I1227 20:50:24.301755 540878 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1227 20:50:24.362291 540878 node_ready.go:35] waiting up to 6m0s for node "embed-certs-920276" to be "Ready" ...
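node_ready.go polls the node object until its Ready condition turns true. Outside the harness, the same six-minute wait could be expressed with kubectl (a sketch, assuming the profile's context name in the kubeconfig matches the profile, which is how minikube names contexts):

    # block up to 6m for the restarted node to report Ready
    kubectl --context embed-certs-920276 wait node/embed-certs-920276 \
      --for=condition=Ready --timeout=6m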
I1227 20:50:24.432110 540878 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1227 20:50:24.474944 540878 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I1227 20:50:24.475020 540878 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I1227 20:50:24.574982 540878 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1227 20:50:24.575061 540878 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1227 20:50:24.611847 540878 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I1227 20:50:24.611926 540878 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I1227 20:50:24.653894 540878 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
I1227 20:50:24.653976 540878 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I1227 20:50:24.680931 540878 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1227 20:50:24.817157 540878 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1227 20:50:24.829273 540878 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1227 20:50:24.829359 540878 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1227 20:50:24.944579 540878 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1227 20:50:24.944661 540878 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1227 20:50:25.031754 540878 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1227 20:50:25.031844 540878 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I1227 20:50:25.167665 540878 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1227 20:50:25.167762 540878 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1227 20:50:25.337071 540878 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1227 20:50:25.337154 540878 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1227 20:50:25.495644 540878 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1227 20:50:25.495725 540878 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1227 20:50:25.545877 540878 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1227 20:50:25.545900 540878 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1227 20:50:25.584252 540878 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1227 20:50:25.584277 540878 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1227 20:50:25.614496 540878 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1227 20:50:27.875637 540878 node_ready.go:49] node "embed-certs-920276" is "Ready"
I1227 20:50:27.875679 540878 node_ready.go:38] duration metric: took 3.513294685s for node "embed-certs-920276" to be "Ready" ...
I1227 20:50:27.875698 540878 api_server.go:52] waiting for apiserver process to appear ...
I1227 20:50:27.875758 540878 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1227 20:50:28.193223 540878 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.761025119s)
I1227 20:50:30.740883 540878 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.059868575s)
I1227 20:50:30.804397 540878 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.98715762s)
I1227 20:50:30.804434 540878 addons.go:495] Verifying addon metrics-server=true in "embed-certs-920276"
I1227 20:50:30.804544 540878 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.190021364s)
I1227 20:50:30.804696 540878 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.928927357s)
I1227 20:50:30.804718 540878 api_server.go:72] duration metric: took 6.834310177s to wait for apiserver process to appear ...
I1227 20:50:30.804725 540878 api_server.go:88] waiting for apiserver healthz status ...
I1227 20:50:30.804744 540878 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1227 20:50:30.807568 540878 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p embed-certs-920276 addons enable metrics-server
I1227 20:50:30.810693 540878 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
I1227 20:50:30.813254 540878 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
ok
I1227 20:50:30.814246 540878 api_server.go:141] control plane version: v1.35.0
I1227 20:50:30.814273 540878 api_server.go:131] duration metric: took 9.541191ms to wait for apiserver health ...
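The healthz wait that just completed is a plain HTTPS GET that must return 200 with body "ok", exactly as the lines above show. Reproduced by hand (-k because the calling shell may not trust the cluster CA):

    # same probe the log performs against the restarted apiserver
    curl -k https://192.168.85.2:8443/healthz
    # expected body: ok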
I1227 20:50:30.814284 540878 system_pods.go:43] waiting for kube-system pods to appear ...
I1227 20:50:30.814548 540878 addons.go:530] duration metric: took 6.843706785s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
I1227 20:50:30.817883 540878 system_pods.go:59] 9 kube-system pods found
I1227 20:50:30.817928 540878 system_pods.go:61] "coredns-7d764666f9-fsvn9" [db2bf94a-5c69-4a44-b6cc-d70fcb4b7df8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1227 20:50:30.817937 540878 system_pods.go:61] "etcd-embed-certs-920276" [6c5c45b2-36fb-4a5d-be57-37fbf3d73d1f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1227 20:50:30.817944 540878 system_pods.go:61] "kindnet-nhb2c" [823229c3-d885-4f86-a40a-1c7d2e155396] Running
I1227 20:50:30.817951 540878 system_pods.go:61] "kube-apiserver-embed-certs-920276" [d60204e3-a1c1-4e78-a489-97ca9d0e3b5e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1227 20:50:30.817957 540878 system_pods.go:61] "kube-controller-manager-embed-certs-920276" [625d25e3-c1e7-44af-a45e-d95244ada624] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1227 20:50:30.817968 540878 system_pods.go:61] "kube-proxy-shcp6" [e4fe1ebf-141f-4b36-9612-ae8f13f002b8] Running
I1227 20:50:30.817975 540878 system_pods.go:61] "kube-scheduler-embed-certs-920276" [f2fff7ad-0808-4a82-92a9-be7f96fa5383] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1227 20:50:30.817986 540878 system_pods.go:61] "metrics-server-5d785b57d4-qjjgk" [0a5f5853-ba0f-4a69-aea1-b88e86d0d92a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1227 20:50:30.817999 540878 system_pods.go:61] "storage-provisioner" [cbb2eec5-c485-4b1f-ad76-bf4511e17a05] Running
I1227 20:50:30.818006 540878 system_pods.go:74] duration metric: took 3.716295ms to wait for pod list to return data ...
I1227 20:50:30.818017 540878 default_sa.go:34] waiting for default service account to be created ...
I1227 20:50:30.820965 540878 default_sa.go:45] found service account: "default"
I1227 20:50:30.820992 540878 default_sa.go:55] duration metric: took 2.967497ms for default service account to be created ...
I1227 20:50:30.821004 540878 system_pods.go:116] waiting for k8s-apps to be running ...
I1227 20:50:30.824451 540878 system_pods.go:86] 9 kube-system pods found
I1227 20:50:30.824486 540878 system_pods.go:89] "coredns-7d764666f9-fsvn9" [db2bf94a-5c69-4a44-b6cc-d70fcb4b7df8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1227 20:50:30.824495 540878 system_pods.go:89] "etcd-embed-certs-920276" [6c5c45b2-36fb-4a5d-be57-37fbf3d73d1f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1227 20:50:30.824502 540878 system_pods.go:89] "kindnet-nhb2c" [823229c3-d885-4f86-a40a-1c7d2e155396] Running
I1227 20:50:30.824509 540878 system_pods.go:89] "kube-apiserver-embed-certs-920276" [d60204e3-a1c1-4e78-a489-97ca9d0e3b5e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1227 20:50:30.824516 540878 system_pods.go:89] "kube-controller-manager-embed-certs-920276" [625d25e3-c1e7-44af-a45e-d95244ada624] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1227 20:50:30.824521 540878 system_pods.go:89] "kube-proxy-shcp6" [e4fe1ebf-141f-4b36-9612-ae8f13f002b8] Running
I1227 20:50:30.824528 540878 system_pods.go:89] "kube-scheduler-embed-certs-920276" [f2fff7ad-0808-4a82-92a9-be7f96fa5383] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1227 20:50:30.824540 540878 system_pods.go:89] "metrics-server-5d785b57d4-qjjgk" [0a5f5853-ba0f-4a69-aea1-b88e86d0d92a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1227 20:50:30.824553 540878 system_pods.go:89] "storage-provisioner" [cbb2eec5-c485-4b1f-ad76-bf4511e17a05] Running
I1227 20:50:30.824561 540878 system_pods.go:126] duration metric: took 3.551708ms to wait for k8s-apps to be running ...
I1227 20:50:30.824572 540878 system_svc.go:44] waiting for kubelet service to be running ....
I1227 20:50:30.824629 540878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1227 20:50:30.838985 540878 system_svc.go:56] duration metric: took 14.403352ms WaitForService to wait for kubelet
I1227 20:50:30.839017 540878 kubeadm.go:587] duration metric: took 6.868606767s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1227 20:50:30.839039 540878 node_conditions.go:102] verifying NodePressure condition ...
I1227 20:50:30.842050 540878 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
I1227 20:50:30.842143 540878 node_conditions.go:123] node cpu capacity is 2
I1227 20:50:30.842164 540878 node_conditions.go:105] duration metric: took 3.12004ms to run NodePressure ...
I1227 20:50:30.842178 540878 start.go:242] waiting for startup goroutines ...
I1227 20:50:30.842186 540878 start.go:247] waiting for cluster config update ...
I1227 20:50:30.842212 540878 start.go:256] writing updated cluster config ...
I1227 20:50:30.842545 540878 ssh_runner.go:195] Run: rm -f paused
I1227 20:50:30.847321 540878 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1227 20:50:30.850939 540878 pod_ready.go:83] waiting for pod "coredns-7d764666f9-fsvn9" in "kube-system" namespace to be "Ready" or be gone ...
W1227 20:50:32.857142 540878 pod_ready.go:104] pod "coredns-7d764666f9-fsvn9" is not "Ready", error: <nil>
W1227 20:50:34.862026 540878 pod_ready.go:104] pod "coredns-7d764666f9-fsvn9" is not "Ready", error: <nil>
W1227 20:50:37.356522 540878 pod_ready.go:104] pod "coredns-7d764666f9-fsvn9" is not "Ready", error: <nil>
W1227 20:50:39.356733 540878 pod_ready.go:104] pod "coredns-7d764666f9-fsvn9" is not "Ready", error: <nil>
W1227 20:50:41.357208 540878 pod_ready.go:104] pod "coredns-7d764666f9-fsvn9" is not "Ready", error: <nil>
W1227 20:50:43.857335 540878 pod_ready.go:104] pod "coredns-7d764666f9-fsvn9" is not "Ready", error: <nil>
W1227 20:50:46.357025 540878 pod_ready.go:104] pod "coredns-7d764666f9-fsvn9" is not "Ready", error: <nil>
W1227 20:50:48.856979 540878 pod_ready.go:104] pod "coredns-7d764666f9-fsvn9" is not "Ready", error: <nil>
W1227 20:50:51.356140 540878 pod_ready.go:104] pod "coredns-7d764666f9-fsvn9" is not "Ready", error: <nil>
W1227 20:50:53.356678 540878 pod_ready.go:104] pod "coredns-7d764666f9-fsvn9" is not "Ready", error: <nil>
W1227 20:50:55.856464 540878 pod_ready.go:104] pod "coredns-7d764666f9-fsvn9" is not "Ready", error: <nil>
W1227 20:50:58.356596 540878 pod_ready.go:104] pod "coredns-7d764666f9-fsvn9" is not "Ready", error: <nil>
W1227 20:51:00.358635 540878 pod_ready.go:104] pod "coredns-7d764666f9-fsvn9" is not "Ready", error: <nil>
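The pod_ready loop above can be approximated from the host with kubectl. A sketch, assuming the kubeconfig context carries the profile name (minikube's default) and reusing the same 4m0s budget:

# Mirror pod_ready.go: block until coredns reports Ready or the timeout hits.
kubectl --context embed-certs-920276 -n kube-system \
  wait pod coredns-7d764666f9-fsvn9 --for=condition=Ready --timeout=4m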
I1227 20:51:04.984612 512816 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000227461s
I1227 20:51:04.984646 512816 kubeadm.go:319]
I1227 20:51:04.984705 512816 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1227 20:51:04.984745 512816 kubeadm.go:319] - The kubelet is not running
I1227 20:51:04.984855 512816 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1227 20:51:04.984862 512816 kubeadm.go:319]
I1227 20:51:04.984968 512816 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1227 20:51:04.985005 512816 kubeadm.go:319] - 'systemctl status kubelet'
I1227 20:51:04.985041 512816 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1227 20:51:04.985048 512816 kubeadm.go:319]
I1227 20:51:04.990635 512816 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1227 20:51:04.991126 512816 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1227 20:51:04.991276 512816 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1227 20:51:04.991544 512816 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1227 20:51:04.991557 512816 kubeadm.go:319]
I1227 20:51:04.991627 512816 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
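The two commands kubeadm recommends run inside the node container, which can be reached with minikube ssh. A sketch, assuming the profile is still running:

# Inspect the kubelet unit on the force-systemd-flag-875839 node.
out/minikube-linux-arm64 ssh -p force-systemd-flag-875839 -- sudo systemctl status kubelet
out/minikube-linux-arm64 ssh -p force-systemd-flag-875839 -- sudo journalctl -xeu kubelet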
I1227 20:51:04.991684 512816 kubeadm.go:403] duration metric: took 8m7.029842058s to StartCluster
I1227 20:51:04.991734 512816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1227 20:51:04.991795 512816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
I1227 20:51:05.027213 512816 cri.go:96] found id: ""
I1227 20:51:05.027263 512816 logs.go:282] 0 containers: []
W1227 20:51:05.027273 512816 logs.go:284] No container was found matching "kube-apiserver"
I1227 20:51:05.027283 512816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1227 20:51:05.027361 512816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
I1227 20:51:05.071930 512816 cri.go:96] found id: ""
I1227 20:51:05.071965 512816 logs.go:282] 0 containers: []
W1227 20:51:05.071975 512816 logs.go:284] No container was found matching "etcd"
I1227 20:51:05.071982 512816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1227 20:51:05.072053 512816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
I1227 20:51:05.107389 512816 cri.go:96] found id: ""
I1227 20:51:05.107457 512816 logs.go:282] 0 containers: []
W1227 20:51:05.107479 512816 logs.go:284] No container was found matching "coredns"
I1227 20:51:05.107501 512816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1227 20:51:05.107591 512816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
I1227 20:51:05.134987 512816 cri.go:96] found id: ""
I1227 20:51:05.135061 512816 logs.go:282] 0 containers: []
W1227 20:51:05.135085 512816 logs.go:284] No container was found matching "kube-scheduler"
I1227 20:51:05.135108 512816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1227 20:51:05.135234 512816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
I1227 20:51:05.165609 512816 cri.go:96] found id: ""
I1227 20:51:05.165637 512816 logs.go:282] 0 containers: []
W1227 20:51:05.165646 512816 logs.go:284] No container was found matching "kube-proxy"
I1227 20:51:05.165653 512816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1227 20:51:05.165737 512816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
I1227 20:51:05.194172 512816 cri.go:96] found id: ""
I1227 20:51:05.194198 512816 logs.go:282] 0 containers: []
W1227 20:51:05.194208 512816 logs.go:284] No container was found matching "kube-controller-manager"
I1227 20:51:05.194215 512816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1227 20:51:05.194319 512816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
I1227 20:51:05.222973 512816 cri.go:96] found id: ""
I1227 20:51:05.223048 512816 logs.go:282] 0 containers: []
W1227 20:51:05.223072 512816 logs.go:284] No container was found matching "kindnet"
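The seven container checks above issue the same crictl query once per component. A compact equivalent, runnable inside the node, using the exact flags from the log; an empty result for every name, as here, means the kubelet never created the static pods:

# Reproduce the cri.go listing: one crictl query per control-plane component.
for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
  echo "== ${name} =="
  sudo crictl --timeout=10s ps -a --quiet --name="${name}"
done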
I1227 20:51:05.223100 512816 logs.go:123] Gathering logs for kubelet ...
I1227 20:51:05.223136 512816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1227 20:51:05.281082 512816 logs.go:123] Gathering logs for dmesg ...
I1227 20:51:05.281117 512816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1227 20:51:05.296692 512816 logs.go:123] Gathering logs for describe nodes ...
I1227 20:51:05.296723 512816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1227 20:51:05.369377 512816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1227 20:51:05.360528 4811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 20:51:05.361234 4811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 20:51:05.362904 4811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 20:51:05.363554 4811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 20:51:05.365159 4811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1227 20:51:05.369414 512816 logs.go:123] Gathering logs for containerd ...
I1227 20:51:05.369427 512816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1227 20:51:05.409653 512816 logs.go:123] Gathering logs for container status ...
I1227 20:51:05.409736 512816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W1227 20:51:05.438490 512816 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000227461s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
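The SystemVerification warning above names the kubelet configuration option that gates this failure. A hypothetical sketch of opting back into cgroup v1 on the node; the YAML key failCgroupV1 is assumed from the warning's wording, and minikube normally manages this file itself:

# Append the opt-out to the kubelet config the log shows being written,
# then restart the unit. Manual patch for illustration only.
echo "failCgroupV1: false" | sudo tee -a /var/lib/kubelet/config.yaml
sudo systemctl restart kubelet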
W1227 20:51:05.438603 512816 out.go:285] *
W1227 20:51:05.438809 512816 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
W1227 20:51:05.438888 512816 out.go:285] *
W1227 20:51:05.439283 512816 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1227 20:51:05.446314 512816 out.go:203]
W1227 20:51:05.449296 512816 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
W1227 20:51:05.449357 512816 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W1227 20:51:05.449379 512816 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
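Minikube's own suggestion translates directly into a retry. A sketch that adds only the suggested override to the profile's start command; any other flags from the original invocation would be carried over unchanged:

# Retry the failed start with the suggested kubelet cgroup-driver override.
out/minikube-linux-arm64 start -p force-systemd-flag-875839 \
  --driver=docker --container-runtime=containerd \
  --extra-config=kubelet.cgroup-driver=systemd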
I1227 20:51:05.452446 512816 out.go:203]
W1227 20:51:02.857475 540878 pod_ready.go:104] pod "coredns-7d764666f9-fsvn9" is not "Ready", error: <nil>
W1227 20:51:04.858731 540878 pod_ready.go:104] pod "coredns-7d764666f9-fsvn9" is not "Ready", error: <nil>
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
==> containerd <==
Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.864969739Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.865041789Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.865147160Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.865231206Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.865299276Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.865363252Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.865430608Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.865491655Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.865557896Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.865646208Z" level=info msg="Connect containerd service"
Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.866021768Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.867416857Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.884643720Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.884710961Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.884741731Z" level=info msg="Start subscribing containerd event"
Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.884804829Z" level=info msg="Start recovering state"
Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.922408981Z" level=info msg="Start event monitor"
Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.922614997Z" level=info msg="Start cni network conf syncer for default"
Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.922685012Z" level=info msg="Start streaming server"
Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.922745500Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.922808648Z" level=info msg="runtime interface starting up..."
Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.922863614Z" level=info msg="starting plugins..."
Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.922948579Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Dec 27 20:42:55 force-systemd-flag-875839 systemd[1]: Started containerd.service - containerd container runtime.
Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.926796686Z" level=info msg="containerd successfully booted in 0.086871s"
==> describe nodes <==
command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1227 20:51:06.876511 4942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 20:51:06.877351 4942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 20:51:06.879380 4942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 20:51:06.880066 4942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1227 20:51:06.881873 4942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
==> dmesg <==
[Dec27 19:54] kauditd_printk_skb: 8 callbacks suppressed
==> kernel <==
20:51:06 up 2:33, 0 user, load average: 0.79, 1.31, 1.72
Linux force-systemd-flag-875839 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
==> kubelet <==
Dec 27 20:51:03 force-systemd-flag-875839 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 27 20:51:04 force-systemd-flag-875839 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
Dec 27 20:51:04 force-systemd-flag-875839 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 27 20:51:04 force-systemd-flag-875839 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 27 20:51:04 force-systemd-flag-875839 kubelet[4737]: E1227 20:51:04.330770 4737 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 27 20:51:04 force-systemd-flag-875839 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 27 20:51:04 force-systemd-flag-875839 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 27 20:51:05 force-systemd-flag-875839 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
Dec 27 20:51:05 force-systemd-flag-875839 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 27 20:51:05 force-systemd-flag-875839 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 27 20:51:05 force-systemd-flag-875839 kubelet[4751]: E1227 20:51:05.102901 4751 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 27 20:51:05 force-systemd-flag-875839 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 27 20:51:05 force-systemd-flag-875839 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 27 20:51:05 force-systemd-flag-875839 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
Dec 27 20:51:05 force-systemd-flag-875839 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 27 20:51:05 force-systemd-flag-875839 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 27 20:51:05 force-systemd-flag-875839 kubelet[4835]: E1227 20:51:05.867822 4835 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 27 20:51:05 force-systemd-flag-875839 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 27 20:51:05 force-systemd-flag-875839 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 27 20:51:06 force-systemd-flag-875839 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
Dec 27 20:51:06 force-systemd-flag-875839 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 27 20:51:06 force-systemd-flag-875839 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 27 20:51:06 force-systemd-flag-875839 kubelet[4874]: E1227 20:51:06.617424 4874 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 27 20:51:06 force-systemd-flag-875839 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 27 20:51:06 force-systemd-flag-875839 systemd[1]: kubelet.service: Failed with result 'exit-code'.
-- /stdout --
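The kubelet restart loop above fails validation because the node sees a cgroup v1 hierarchy. A quick, standard check for confirming which cgroup version a host or node container is on:

# cgroup2fs indicates cgroup v2; tmpfs indicates the legacy v1 hierarchy.
stat -fc %T /sys/fs/cgroup
# The Docker daemon's view of the host it launches node containers on:
docker info --format '{{.CgroupVersion}}'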
helpers_test.go:263: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-875839 -n force-systemd-flag-875839
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-875839 -n force-systemd-flag-875839: exit status 6 (350.487906ms)
-- stdout --
Stopped
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1227 20:51:07.347973 544224 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-875839" does not appear in /home/jenkins/minikube-integration/22332-300670/kubeconfig
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-flag-875839" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-flag-875839" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-arm64 delete -p force-systemd-flag-875839
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-875839: (2.01079817s)
--- FAIL: TestForceSystemdFlag (504.97s)