=== RUN TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT TestForceSystemdFlag
docker_test.go:91: (dbg) Run: out/minikube-linux-arm64 start -p force-systemd-flag-275936 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=containerd
E1229 07:32:59.925165 4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/functional-421974/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:33:30.655767 4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/addons-679786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-flag-275936 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=containerd: exit status 109 (8m20.690144063s)
-- stdout --
* [force-systemd-flag-275936] minikube v1.37.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=22353
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/22353-2531/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-2531/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "force-systemd-flag-275936" primary control-plane node in "force-systemd-flag-275936" cluster
* Pulling base image v0.0.48-1766979815-22353 ...
* Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
-- /stdout --
** stderr **
I1229 07:31:54.990118 210456 out.go:360] Setting OutFile to fd 1 ...
I1229 07:31:54.990303 210456 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 07:31:54.990335 210456 out.go:374] Setting ErrFile to fd 2...
I1229 07:31:54.990355 210456 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 07:31:54.990732 210456 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-2531/.minikube/bin
I1229 07:31:54.991290 210456 out.go:368] Setting JSON to false
I1229 07:31:54.992670 210456 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4466,"bootTime":1766989049,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
I1229 07:31:54.992775 210456 start.go:143] virtualization:
I1229 07:31:54.999193 210456 out.go:179] * [force-systemd-flag-275936] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I1229 07:31:55.014374 210456 out.go:179] - MINIKUBE_LOCATION=22353
I1229 07:31:55.014524 210456 notify.go:221] Checking for updates...
I1229 07:31:55.021978 210456 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1229 07:31:55.025445 210456 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22353-2531/kubeconfig
I1229 07:31:55.028900 210456 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-2531/.minikube
I1229 07:31:55.032526 210456 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I1229 07:31:55.035779 210456 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1229 07:31:55.039716 210456 config.go:182] Loaded profile config "force-systemd-env-765623": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1229 07:31:55.039858 210456 driver.go:422] Setting default libvirt URI to qemu:///system
I1229 07:31:55.062289 210456 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
I1229 07:31:55.062411 210456 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1229 07:31:55.126864 210456 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-29 07:31:55.117265138 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1229 07:31:55.126971 210456 docker.go:319] overlay module found
I1229 07:31:55.130288 210456 out.go:179] * Using the docker driver based on user configuration
I1229 07:31:55.133429 210456 start.go:309] selected driver: docker
I1229 07:31:55.133455 210456 start.go:928] validating driver "docker" against <nil>
I1229 07:31:55.133470 210456 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1229 07:31:55.134222 210456 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1229 07:31:55.189237 210456 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-29 07:31:55.17992811 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1229 07:31:55.189389 210456 start_flags.go:333] no existing cluster config was found, will generate one from the flags
I1229 07:31:55.189601 210456 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
I1229 07:31:55.192735 210456 out.go:179] * Using Docker driver with root privileges
I1229 07:31:55.195689 210456 cni.go:84] Creating CNI manager for ""
I1229 07:31:55.195764 210456 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1229 07:31:55.195784 210456 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
I1229 07:31:55.195864 210456 start.go:353] cluster config:
{Name:force-systemd-flag-275936 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-275936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1229 07:31:55.199033 210456 out.go:179] * Starting "force-systemd-flag-275936" primary control-plane node in "force-systemd-flag-275936" cluster
I1229 07:31:55.201990 210456 cache.go:134] Beginning downloading kic base image for docker with containerd
I1229 07:31:55.205087 210456 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
I1229 07:31:55.208135 210456 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I1229 07:31:55.208186 210456 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-2531/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
I1229 07:31:55.208196 210456 cache.go:65] Caching tarball of preloaded images
I1229 07:31:55.208228 210456 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
I1229 07:31:55.208280 210456 preload.go:251] Found /home/jenkins/minikube-integration/22353-2531/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I1229 07:31:55.208290 210456 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
I1229 07:31:55.208394 210456 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/config.json ...
I1229 07:31:55.208411 210456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/config.json: {Name:mkce2701c5739928b2701138ece40a77f13e0afb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1229 07:31:55.235557 210456 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
I1229 07:31:55.235583 210456 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
I1229 07:31:55.235603 210456 cache.go:243] Successfully downloaded all kic artifacts
I1229 07:31:55.235641 210456 start.go:360] acquireMachinesLock for force-systemd-flag-275936: {Name:mkc1ff8fd971687527ddb66e30c065b7dec5d125 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1229 07:31:55.235763 210456 start.go:364] duration metric: took 102.705µs to acquireMachinesLock for "force-systemd-flag-275936"
I1229 07:31:55.235792 210456 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-275936 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-275936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1229 07:31:55.235867 210456 start.go:125] createHost starting for "" (driver="docker")
I1229 07:31:55.239336 210456 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
I1229 07:31:55.239605 210456 start.go:159] libmachine.API.Create for "force-systemd-flag-275936" (driver="docker")
I1229 07:31:55.239645 210456 client.go:173] LocalClient.Create starting
I1229 07:31:55.239732 210456 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca.pem
I1229 07:31:55.239774 210456 main.go:144] libmachine: Decoding PEM data...
I1229 07:31:55.239790 210456 main.go:144] libmachine: Parsing certificate...
I1229 07:31:55.239844 210456 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-2531/.minikube/certs/cert.pem
I1229 07:31:55.239866 210456 main.go:144] libmachine: Decoding PEM data...
I1229 07:31:55.239877 210456 main.go:144] libmachine: Parsing certificate...
I1229 07:31:55.240246 210456 cli_runner.go:164] Run: docker network inspect force-systemd-flag-275936 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1229 07:31:55.259118 210456 cli_runner.go:211] docker network inspect force-systemd-flag-275936 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1229 07:31:55.259228 210456 network_create.go:284] running [docker network inspect force-systemd-flag-275936] to gather additional debugging logs...
I1229 07:31:55.259249 210456 cli_runner.go:164] Run: docker network inspect force-systemd-flag-275936
W1229 07:31:55.275676 210456 cli_runner.go:211] docker network inspect force-systemd-flag-275936 returned with exit code 1
I1229 07:31:55.275729 210456 network_create.go:287] error running [docker network inspect force-systemd-flag-275936]: docker network inspect force-systemd-flag-275936: exit status 1
stdout:
[]
stderr:
Error response from daemon: network force-systemd-flag-275936 not found
I1229 07:31:55.275743 210456 network_create.go:289] output of [docker network inspect force-systemd-flag-275936]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network force-systemd-flag-275936 not found
** /stderr **
I1229 07:31:55.275852 210456 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1229 07:31:55.295712 210456 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1d2fb4677b5c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:ba:f6:c7:fb:95} reservation:<nil>}
I1229 07:31:55.296163 210456 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-2e904d35ba79 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:62:bf:e8:2d:86:57} reservation:<nil>}
I1229 07:31:55.296569 210456 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a0c1c34f63a4 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:7a:96:61:f1:83:fb} reservation:<nil>}
I1229 07:31:55.297004 210456 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c78f904b7647 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:da:23:10:63:16:dd} reservation:<nil>}
I1229 07:31:55.297525 210456 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a4b020}
I1229 07:31:55.297549 210456 network_create.go:124] attempt to create docker network force-systemd-flag-275936 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
I1229 07:31:55.297626 210456 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-275936 force-systemd-flag-275936
I1229 07:31:55.356469 210456 network_create.go:108] docker network force-systemd-flag-275936 192.168.85.0/24 created
I1229 07:31:55.356503 210456 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-275936" container
I1229 07:31:55.356596 210456 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1229 07:31:55.372634 210456 cli_runner.go:164] Run: docker volume create force-systemd-flag-275936 --label name.minikube.sigs.k8s.io=force-systemd-flag-275936 --label created_by.minikube.sigs.k8s.io=true
I1229 07:31:55.390334 210456 oci.go:103] Successfully created a docker volume force-systemd-flag-275936
I1229 07:31:55.390428 210456 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-275936-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-275936 --entrypoint /usr/bin/test -v force-systemd-flag-275936:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -d /var/lib
I1229 07:31:55.963123 210456 oci.go:107] Successfully prepared a docker volume force-systemd-flag-275936
I1229 07:31:55.963188 210456 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I1229 07:31:55.963199 210456 kic.go:194] Starting extracting preloaded images to volume ...
I1229 07:31:55.963282 210456 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-2531/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-275936:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir
I1229 07:31:59.824384 210456 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-2531/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-275936:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir: (3.86104634s)
I1229 07:31:59.824418 210456 kic.go:203] duration metric: took 3.861215926s to extract preloaded images to volume ...
W1229 07:31:59.824564 210456 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1229 07:31:59.824685 210456 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1229 07:31:59.876072 210456 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-275936 --name force-systemd-flag-275936 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-275936 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-275936 --network force-systemd-flag-275936 --ip 192.168.85.2 --volume force-systemd-flag-275936:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409
I1229 07:32:00.556829 210456 cli_runner.go:164] Run: docker container inspect force-systemd-flag-275936 --format={{.State.Running}}
I1229 07:32:00.579290 210456 cli_runner.go:164] Run: docker container inspect force-systemd-flag-275936 --format={{.State.Status}}
I1229 07:32:00.610624 210456 cli_runner.go:164] Run: docker exec force-systemd-flag-275936 stat /var/lib/dpkg/alternatives/iptables
I1229 07:32:00.666102 210456 oci.go:144] the created container "force-systemd-flag-275936" has a running status.
I1229 07:32:00.666144 210456 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22353-2531/.minikube/machines/force-systemd-flag-275936/id_rsa...
I1229 07:32:00.928093 210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/machines/force-systemd-flag-275936/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I1229 07:32:00.928158 210456 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22353-2531/.minikube/machines/force-systemd-flag-275936/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1229 07:32:00.955575 210456 cli_runner.go:164] Run: docker container inspect force-systemd-flag-275936 --format={{.State.Status}}
I1229 07:32:00.978812 210456 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1229 07:32:00.978832 210456 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-275936 chown docker:docker /home/docker/.ssh/authorized_keys]
I1229 07:32:01.046827 210456 cli_runner.go:164] Run: docker container inspect force-systemd-flag-275936 --format={{.State.Status}}
I1229 07:32:01.063890 210456 machine.go:94] provisionDockerMachine start ...
I1229 07:32:01.063978 210456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-275936
I1229 07:32:01.083021 210456 main.go:144] libmachine: Using SSH client type: native
I1229 07:32:01.083355 210456 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33043 <nil> <nil>}
I1229 07:32:01.083364 210456 main.go:144] libmachine: About to run SSH command:
hostname
I1229 07:32:01.084071 210456 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I1229 07:32:04.237095 210456 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-275936
I1229 07:32:04.237134 210456 ubuntu.go:182] provisioning hostname "force-systemd-flag-275936"
I1229 07:32:04.237227 210456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-275936
I1229 07:32:04.256216 210456 main.go:144] libmachine: Using SSH client type: native
I1229 07:32:04.256528 210456 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33043 <nil> <nil>}
I1229 07:32:04.256544 210456 main.go:144] libmachine: About to run SSH command:
sudo hostname force-systemd-flag-275936 && echo "force-systemd-flag-275936" | sudo tee /etc/hostname
I1229 07:32:04.418929 210456 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-275936
I1229 07:32:04.419007 210456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-275936
I1229 07:32:04.446717 210456 main.go:144] libmachine: Using SSH client type: native
I1229 07:32:04.447036 210456 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33043 <nil> <nil>}
I1229 07:32:04.447059 210456 main.go:144] libmachine: About to run SSH command:
if ! grep -xq '.*\sforce-systemd-flag-275936' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-275936/g' /etc/hosts;
else
echo '127.0.1.1 force-systemd-flag-275936' | sudo tee -a /etc/hosts;
fi
fi
I1229 07:32:04.609426 210456 main.go:144] libmachine: SSH cmd err, output: <nil>:
I1229 07:32:04.609457 210456 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-2531/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-2531/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-2531/.minikube}
I1229 07:32:04.609486 210456 ubuntu.go:190] setting up certificates
I1229 07:32:04.609501 210456 provision.go:84] configureAuth start
I1229 07:32:04.609566 210456 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-275936
I1229 07:32:04.626388 210456 provision.go:143] copyHostCerts
I1229 07:32:04.626430 210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22353-2531/.minikube/cert.pem
I1229 07:32:04.626466 210456 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-2531/.minikube/cert.pem, removing ...
I1229 07:32:04.626484 210456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-2531/.minikube/cert.pem
I1229 07:32:04.626565 210456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-2531/.minikube/cert.pem (1123 bytes)
I1229 07:32:04.626654 210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22353-2531/.minikube/key.pem
I1229 07:32:04.626677 210456 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-2531/.minikube/key.pem, removing ...
I1229 07:32:04.626681 210456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-2531/.minikube/key.pem
I1229 07:32:04.626716 210456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-2531/.minikube/key.pem (1679 bytes)
I1229 07:32:04.626772 210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22353-2531/.minikube/ca.pem
I1229 07:32:04.626794 210456 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-2531/.minikube/ca.pem, removing ...
I1229 07:32:04.626799 210456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-2531/.minikube/ca.pem
I1229 07:32:04.626833 210456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-2531/.minikube/ca.pem (1082 bytes)
I1229 07:32:04.626893 210456 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-2531/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-275936 san=[127.0.0.1 192.168.85.2 force-systemd-flag-275936 localhost minikube]
I1229 07:32:05.170037 210456 provision.go:177] copyRemoteCerts
I1229 07:32:05.170107 210456 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1229 07:32:05.170157 210456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-275936
I1229 07:32:05.198376 210456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/force-systemd-flag-275936/id_rsa Username:docker}
I1229 07:32:05.304972 210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I1229 07:32:05.305054 210456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1229 07:32:05.323515 210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/machines/server.pem -> /etc/docker/server.pem
I1229 07:32:05.323579 210456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
I1229 07:32:05.342427 210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I1229 07:32:05.342499 210456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1229 07:32:05.360800 210456 provision.go:87] duration metric: took 751.283522ms to configureAuth
I1229 07:32:05.360827 210456 ubuntu.go:206] setting minikube options for container-runtime
I1229 07:32:05.361018 210456 config.go:182] Loaded profile config "force-systemd-flag-275936": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1229 07:32:05.361047 210456 machine.go:97] duration metric: took 4.297134989s to provisionDockerMachine
I1229 07:32:05.361055 210456 client.go:176] duration metric: took 10.12140189s to LocalClient.Create
I1229 07:32:05.361075 210456 start.go:167] duration metric: took 10.121472807s to libmachine.API.Create "force-systemd-flag-275936"
I1229 07:32:05.361083 210456 start.go:293] postStartSetup for "force-systemd-flag-275936" (driver="docker")
I1229 07:32:05.361091 210456 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1229 07:32:05.361147 210456 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1229 07:32:05.361185 210456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-275936
I1229 07:32:05.380875 210456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/force-systemd-flag-275936/id_rsa Username:docker}
I1229 07:32:05.485408 210456 ssh_runner.go:195] Run: cat /etc/os-release
I1229 07:32:05.489100 210456 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1229 07:32:05.489170 210456 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1229 07:32:05.489195 210456 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-2531/.minikube/addons for local assets ...
I1229 07:32:05.489255 210456 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-2531/.minikube/files for local assets ...
I1229 07:32:05.489343 210456 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-2531/.minikube/files/etc/ssl/certs/43522.pem -> 43522.pem in /etc/ssl/certs
I1229 07:32:05.489355 210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/files/etc/ssl/certs/43522.pem -> /etc/ssl/certs/43522.pem
I1229 07:32:05.489461 210456 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1229 07:32:05.497396 210456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/files/etc/ssl/certs/43522.pem --> /etc/ssl/certs/43522.pem (1708 bytes)
I1229 07:32:05.515749 210456 start.go:296] duration metric: took 154.652975ms for postStartSetup
I1229 07:32:05.516127 210456 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-275936
I1229 07:32:05.533819 210456 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/config.json ...
I1229 07:32:05.534100 210456 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1229 07:32:05.534159 210456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-275936
I1229 07:32:05.551565 210456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/force-systemd-flag-275936/id_rsa Username:docker}
I1229 07:32:05.654403 210456 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1229 07:32:05.659341 210456 start.go:128] duration metric: took 10.423458394s to createHost
I1229 07:32:05.659375 210456 start.go:83] releasing machines lock for "force-systemd-flag-275936", held for 10.423592738s
I1229 07:32:05.659448 210456 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-275936
I1229 07:32:05.678420 210456 ssh_runner.go:195] Run: cat /version.json
I1229 07:32:05.678492 210456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-275936
I1229 07:32:05.678576 210456 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1229 07:32:05.678642 210456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-275936
I1229 07:32:05.697110 210456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/force-systemd-flag-275936/id_rsa Username:docker}
I1229 07:32:05.710766 210456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/force-systemd-flag-275936/id_rsa Username:docker}
I1229 07:32:05.800849 210456 ssh_runner.go:195] Run: systemctl --version
I1229 07:32:05.906106 210456 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1229 07:32:05.913794 210456 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1229 07:32:05.913886 210456 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1229 07:32:05.943408 210456 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
I1229 07:32:05.943429 210456 start.go:496] detecting cgroup driver to use...
I1229 07:32:05.943443 210456 start.go:500] using "systemd" cgroup driver as enforced via flags
I1229 07:32:05.943498 210456 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1229 07:32:05.960297 210456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1229 07:32:05.975696 210456 docker.go:218] disabling cri-docker service (if available) ...
I1229 07:32:05.975754 210456 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1229 07:32:05.997010 210456 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1229 07:32:06.022997 210456 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1229 07:32:06.148117 210456 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1229 07:32:06.280642 210456 docker.go:234] disabling docker service ...
I1229 07:32:06.280756 210456 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1229 07:32:06.304036 210456 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1229 07:32:06.318700 210456 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1229 07:32:06.443465 210456 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1229 07:32:06.572584 210456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1229 07:32:06.586444 210456 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1229 07:32:06.602103 210456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1229 07:32:06.611453 210456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1229 07:32:06.620606 210456 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
I1229 07:32:06.620725 210456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
I1229 07:32:06.630240 210456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1229 07:32:06.639541 210456 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1229 07:32:06.649286 210456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1229 07:32:06.658362 210456 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1229 07:32:06.667478 210456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1229 07:32:06.677469 210456 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1229 07:32:06.687174 210456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1229 07:32:06.696948 210456 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1229 07:32:06.705434 210456 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1229 07:32:06.713593 210456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1229 07:32:06.830071 210456 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1229 07:32:06.972284 210456 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
I1229 07:32:06.972372 210456 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1229 07:32:06.976442 210456 start.go:574] Will wait 60s for crictl version
I1229 07:32:06.976556 210456 ssh_runner.go:195] Run: which crictl
I1229 07:32:06.980543 210456 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1229 07:32:07.009695 210456 start.go:590] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v2.2.1
RuntimeApiVersion: v1
I1229 07:32:07.009824 210456 ssh_runner.go:195] Run: containerd --version
I1229 07:32:07.032066 210456 ssh_runner.go:195] Run: containerd --version
I1229 07:32:07.059211 210456 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
I1229 07:32:07.062242 210456 cli_runner.go:164] Run: docker network inspect force-systemd-flag-275936 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1229 07:32:07.079092 210456 ssh_runner.go:195] Run: grep 192.168.85.1 host.minikube.internal$ /etc/hosts
I1229 07:32:07.083157 210456 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1229 07:32:07.093628 210456 kubeadm.go:884] updating cluster {Name:force-systemd-flag-275936 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-275936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
I1229 07:32:07.093752 210456 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I1229 07:32:07.093832 210456 ssh_runner.go:195] Run: sudo crictl images --output json
I1229 07:32:07.119407 210456 containerd.go:635] all images are preloaded for containerd runtime.
I1229 07:32:07.119431 210456 containerd.go:542] Images already preloaded, skipping extraction
I1229 07:32:07.119497 210456 ssh_runner.go:195] Run: sudo crictl images --output json
I1229 07:32:07.144660 210456 containerd.go:635] all images are preloaded for containerd runtime.
I1229 07:32:07.144737 210456 cache_images.go:86] Images are preloaded, skipping loading
I1229 07:32:07.144759 210456 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 containerd true true} ...
I1229 07:32:07.144898 210456 kubeadm.go:947] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-275936 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
[Install]
config:
{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-275936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1229 07:32:07.144994 210456 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
I1229 07:32:07.174108 210456 cni.go:84] Creating CNI manager for ""
I1229 07:32:07.174131 210456 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1229 07:32:07.174152 210456 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1229 07:32:07.174176 210456 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-275936 NodeName:force-systemd-flag-275936 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1229 07:32:07.174301 210456 kubeadm.go:203] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.85.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
name: "force-systemd-flag-275936"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.85.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.35.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I1229 07:32:07.174374 210456 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
I1229 07:32:07.182508 210456 binaries.go:51] Found k8s binaries, skipping transfer
I1229 07:32:07.182591 210456 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1229 07:32:07.190487 210456 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
I1229 07:32:07.203868 210456 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1229 07:32:07.217157 210456 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
I1229 07:32:07.229905 210456 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I1229 07:32:07.233686 210456 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1229 07:32:07.243649 210456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1229 07:32:07.352826 210456 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1229 07:32:07.369694 210456 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936 for IP: 192.168.85.2
I1229 07:32:07.369715 210456 certs.go:195] generating shared ca certs ...
I1229 07:32:07.369731 210456 certs.go:227] acquiring lock for ca certs: {Name:mked57565cbf0e383e0786d048d53beb808c0609 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1229 07:32:07.369899 210456 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-2531/.minikube/ca.key
I1229 07:32:07.369954 210456 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-2531/.minikube/proxy-client-ca.key
I1229 07:32:07.369966 210456 certs.go:257] generating profile certs ...
I1229 07:32:07.370034 210456 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/client.key
I1229 07:32:07.370051 210456 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/client.crt with IP's: []
I1229 07:32:07.651508 210456 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/client.crt ...
I1229 07:32:07.651543 210456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/client.crt: {Name:mkc96444933691c9c7712e10522774b7837acc9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1229 07:32:07.651739 210456 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/client.key ...
I1229 07:32:07.651754 210456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/client.key: {Name:mk42aa340448fdd8ef54b06b419e1bc9521849ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1229 07:32:07.651848 210456 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/apiserver.key.8125716f
I1229 07:32:07.651868 210456 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/apiserver.crt.8125716f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
I1229 07:32:07.848324 210456 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/apiserver.crt.8125716f ...
I1229 07:32:07.848363 210456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/apiserver.crt.8125716f: {Name:mk68515dedc39c6aa92cea4b93fb1d928671a1f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1229 07:32:07.848540 210456 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/apiserver.key.8125716f ...
I1229 07:32:07.848554 210456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/apiserver.key.8125716f: {Name:mk54be500c1ee65f80b3e1b34359ca9c53176eeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1229 07:32:07.848636 210456 certs.go:382] copying /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/apiserver.crt.8125716f -> /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/apiserver.crt
I1229 07:32:07.848714 210456 certs.go:386] copying /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/apiserver.key.8125716f -> /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/apiserver.key
I1229 07:32:07.848778 210456 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/proxy-client.key
I1229 07:32:07.848799 210456 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/proxy-client.crt with IP's: []
I1229 07:32:07.938444 210456 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/proxy-client.crt ...
I1229 07:32:07.938479 210456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/proxy-client.crt: {Name:mkdba6db08e3be7cf95db626fb2a49fc799397bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1229 07:32:07.938677 210456 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/proxy-client.key ...
I1229 07:32:07.938695 210456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/proxy-client.key: {Name:mk5ae15bf1e8cecd3236539da010f90c7a6ecc50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1229 07:32:07.938805 210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I1229 07:32:07.938827 210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I1229 07:32:07.938845 210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I1229 07:32:07.938871 210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I1229 07:32:07.938889 210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I1229 07:32:07.938912 210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I1229 07:32:07.938935 210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I1229 07:32:07.938954 210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I1229 07:32:07.939035 210456 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/4352.pem (1338 bytes)
W1229 07:32:07.939082 210456 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-2531/.minikube/certs/4352_empty.pem, impossibly tiny 0 bytes
I1229 07:32:07.939096 210456 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca-key.pem (1679 bytes)
I1229 07:32:07.939131 210456 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca.pem (1082 bytes)
I1229 07:32:07.939161 210456 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/cert.pem (1123 bytes)
I1229 07:32:07.939189 210456 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/key.pem (1679 bytes)
I1229 07:32:07.939241 210456 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-2531/.minikube/files/etc/ssl/certs/43522.pem (1708 bytes)
I1229 07:32:07.939275 210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/files/etc/ssl/certs/43522.pem -> /usr/share/ca-certificates/43522.pem
I1229 07:32:07.939292 210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I1229 07:32:07.939303 210456 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/4352.pem -> /usr/share/ca-certificates/4352.pem
I1229 07:32:07.939851 210456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1229 07:32:07.959114 210456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1229 07:32:07.979433 210456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1229 07:32:07.999407 210456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1229 07:32:08.025146 210456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
I1229 07:32:08.044851 210456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1229 07:32:08.064128 210456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1229 07:32:08.082920 210456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/force-systemd-flag-275936/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I1229 07:32:08.101321 210456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/files/etc/ssl/certs/43522.pem --> /usr/share/ca-certificates/43522.pem (1708 bytes)
I1229 07:32:08.120009 210456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1229 07:32:08.138483 210456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/certs/4352.pem --> /usr/share/ca-certificates/4352.pem (1338 bytes)
I1229 07:32:08.156729 210456 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
I1229 07:32:08.171909 210456 ssh_runner.go:195] Run: openssl version
I1229 07:32:08.195409 210456 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/43522.pem
I1229 07:32:08.211936 210456 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/43522.pem /etc/ssl/certs/43522.pem
I1229 07:32:08.230603 210456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43522.pem
I1229 07:32:08.242069 210456 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 06:52 /usr/share/ca-certificates/43522.pem
I1229 07:32:08.242186 210456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43522.pem
I1229 07:32:08.291052 210456 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I1229 07:32:08.298917 210456 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/43522.pem /etc/ssl/certs/3ec20f2e.0
I1229 07:32:08.306719 210456 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1229 07:32:08.314431 210456 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1229 07:32:08.322022 210456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1229 07:32:08.325798 210456 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 06:47 /usr/share/ca-certificates/minikubeCA.pem
I1229 07:32:08.325862 210456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1229 07:32:08.366998 210456 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1229 07:32:08.374640 210456 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
I1229 07:32:08.382089 210456 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4352.pem
I1229 07:32:08.389623 210456 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4352.pem /etc/ssl/certs/4352.pem
I1229 07:32:08.397153 210456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4352.pem
I1229 07:32:08.400818 210456 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 06:52 /usr/share/ca-certificates/4352.pem
I1229 07:32:08.400884 210456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4352.pem
I1229 07:32:08.442132 210456 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I1229 07:32:08.450035 210456 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4352.pem /etc/ssl/certs/51391683.0
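The openssl x509 -hash runs above implement the standard OpenSSL trust-store convention: each CA file gets a /etc/ssl/certs/<subject-hash>.0 symlink so the library can locate it by hash. A sketch of the same wiring done by hand (paths taken from the log lines above; everything else is illustrative):
# Sketch: reproduce the subject-hash symlink step seen above.
HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
# In this run the hash came out as b5213941, matching the ln -fs target above.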
I1229 07:32:08.457563 210456 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1229 07:32:08.461327 210456 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1229 07:32:08.461398 210456 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-275936 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-275936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1229 07:32:08.461473 210456 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1229 07:32:08.461536 210456 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1229 07:32:08.487727 210456 cri.go:96] found id: ""
I1229 07:32:08.487799 210456 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1229 07:32:08.496267 210456 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1229 07:32:08.504412 210456 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1229 07:32:08.504475 210456 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1229 07:32:08.512558 210456 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1229 07:32:08.512581 210456 kubeadm.go:158] found existing configuration files:
I1229 07:32:08.512658 210456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1229 07:32:08.521258 210456 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1229 07:32:08.521347 210456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1229 07:32:08.529140 210456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1229 07:32:08.537528 210456 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1229 07:32:08.537643 210456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1229 07:32:08.545668 210456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1229 07:32:08.554110 210456 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1229 07:32:08.554178 210456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1229 07:32:08.562121 210456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1229 07:32:08.570646 210456 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1229 07:32:08.570735 210456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1229 07:32:08.578306 210456 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1229 07:32:08.621067 210456 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
I1229 07:32:08.621176 210456 kubeadm.go:319] [preflight] Running pre-flight checks
I1229 07:32:08.701369 210456 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1229 07:32:08.701449 210456 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1229 07:32:08.701490 210456 kubeadm.go:319] OS: Linux
I1229 07:32:08.701540 210456 kubeadm.go:319] CGROUPS_CPU: enabled
I1229 07:32:08.701591 210456 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1229 07:32:08.701642 210456 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1229 07:32:08.701717 210456 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1229 07:32:08.701769 210456 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1229 07:32:08.701820 210456 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1229 07:32:08.701869 210456 kubeadm.go:319] CGROUPS_PIDS: enabled
I1229 07:32:08.701919 210456 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1229 07:32:08.701970 210456 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1229 07:32:08.775505 210456 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1229 07:32:08.775618 210456 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1229 07:32:08.775723 210456 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1229 07:32:08.781821 210456 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1229 07:32:08.788503 210456 out.go:252] - Generating certificates and keys ...
I1229 07:32:08.788612 210456 kubeadm.go:319] [certs] Using existing ca certificate authority
I1229 07:32:08.788684 210456 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1229 07:32:09.057098 210456 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1229 07:32:09.418697 210456 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1229 07:32:09.572406 210456 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1229 07:32:09.643544 210456 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1229 07:32:10.339592 210456 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1229 07:32:10.339844 210456 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-275936 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I1229 07:32:10.482674 210456 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1229 07:32:10.483213 210456 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-275936 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I1229 07:32:10.795512 210456 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1229 07:32:10.975588 210456 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1229 07:32:11.248756 210456 kubeadm.go:319] [certs] Generating "sa" key and public key
I1229 07:32:11.248853 210456 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1229 07:32:11.450295 210456 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1229 07:32:11.719139 210456 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1229 07:32:11.898464 210456 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1229 07:32:12.299659 210456 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1229 07:32:12.511471 210456 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1229 07:32:12.512244 210456 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1229 07:32:12.515181 210456 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1229 07:32:12.518939 210456 out.go:252] - Booting up control plane ...
I1229 07:32:12.519048 210456 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1229 07:32:12.519127 210456 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1229 07:32:12.519194 210456 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1229 07:32:12.536415 210456 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1229 07:32:12.536828 210456 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1229 07:32:12.543945 210456 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1229 07:32:12.544276 210456 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1229 07:32:12.544449 210456 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1229 07:32:12.687864 210456 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1229 07:32:12.687987 210456 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1229 07:36:12.687849 210456 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000060915s
I1229 07:36:12.694469 210456 kubeadm.go:319]
I1229 07:36:12.694559 210456 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1229 07:36:12.694595 210456 kubeadm.go:319] - The kubelet is not running
I1229 07:36:12.694725 210456 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1229 07:36:12.694731 210456 kubeadm.go:319]
I1229 07:36:12.694848 210456 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1229 07:36:12.694906 210456 kubeadm.go:319] - 'systemctl status kubelet'
I1229 07:36:12.694944 210456 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1229 07:36:12.694949 210456 kubeadm.go:319]
I1229 07:36:12.701322 210456 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1229 07:36:12.701789 210456 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1229 07:36:12.701905 210456 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1229 07:36:12.702188 210456 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I1229 07:36:12.702194 210456 kubeadm.go:319]
I1229 07:36:12.702267 210456 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
W1229 07:36:12.702503 210456 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-275936 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-275936 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000060915s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
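kubeadm's failure text above names the triage steps itself; a compact sketch combining them with the healthz probe from the error message (run inside the node, e.g. via minikube ssh; only the commands quoted in the log are taken as given):
# Sketch: kubelet triage per the suggestions in the failure output above.
systemctl status kubelet --no-pager        # is the unit active?
journalctl -xeu kubelet | tail -n 50       # why did it exit?
curl -sSL http://127.0.0.1:10248/healthz   # the endpoint kubeadm was polling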
I1229 07:36:12.702830 210456 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I1229 07:36:13.130969 210456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1229 07:36:13.144509 210456 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1229 07:36:13.144588 210456 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1229 07:36:13.152734 210456 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1229 07:36:13.152755 210456 kubeadm.go:158] found existing configuration files:
I1229 07:36:13.152827 210456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1229 07:36:13.161205 210456 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1229 07:36:13.161278 210456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1229 07:36:13.168963 210456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1229 07:36:13.177064 210456 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1229 07:36:13.177181 210456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1229 07:36:13.185073 210456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1229 07:36:13.192932 210456 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1229 07:36:13.192995 210456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1229 07:36:13.200704 210456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1229 07:36:13.208294 210456 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1229 07:36:13.208364 210456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1229 07:36:13.215923 210456 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1229 07:36:13.255474 210456 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
I1229 07:36:13.255540 210456 kubeadm.go:319] [preflight] Running pre-flight checks
I1229 07:36:13.344915 210456 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1229 07:36:13.345140 210456 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1229 07:36:13.345227 210456 kubeadm.go:319] OS: Linux
I1229 07:36:13.345315 210456 kubeadm.go:319] CGROUPS_CPU: enabled
I1229 07:36:13.345400 210456 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1229 07:36:13.345502 210456 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1229 07:36:13.345585 210456 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1229 07:36:13.345684 210456 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1229 07:36:13.345799 210456 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1229 07:36:13.345901 210456 kubeadm.go:319] CGROUPS_PIDS: enabled
I1229 07:36:13.346009 210456 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1229 07:36:13.346104 210456 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1229 07:36:13.422164 210456 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1229 07:36:13.422340 210456 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1229 07:36:13.422476 210456 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1229 07:36:13.431759 210456 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1229 07:36:13.437264 210456 out.go:252] - Generating certificates and keys ...
I1229 07:36:13.437454 210456 kubeadm.go:319] [certs] Using existing ca certificate authority
I1229 07:36:13.437542 210456 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1229 07:36:13.437642 210456 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1229 07:36:13.437707 210456 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
I1229 07:36:13.437821 210456 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
I1229 07:36:13.437893 210456 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
I1229 07:36:13.437968 210456 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
I1229 07:36:13.438034 210456 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
I1229 07:36:13.438113 210456 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1229 07:36:13.438193 210456 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1229 07:36:13.438257 210456 kubeadm.go:319] [certs] Using the existing "sa" key
I1229 07:36:13.438352 210456 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1229 07:36:13.634753 210456 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1229 07:36:14.203935 210456 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1229 07:36:14.514271 210456 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1229 07:36:14.708050 210456 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1229 07:36:14.968546 210456 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1229 07:36:14.969321 210456 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1229 07:36:14.972216 210456 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1229 07:36:14.975375 210456 out.go:252] - Booting up control plane ...
I1229 07:36:14.975476 210456 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1229 07:36:14.975579 210456 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1229 07:36:14.975646 210456 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1229 07:36:14.999594 210456 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1229 07:36:14.999710 210456 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1229 07:36:15.032143 210456 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1229 07:36:15.032250 210456 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1229 07:36:15.032290 210456 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1229 07:36:15.207023 210456 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1229 07:36:15.207148 210456 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1229 07:40:15.207793 210456 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001145684s
I1229 07:40:15.207822 210456 kubeadm.go:319]
I1229 07:40:15.207881 210456 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1229 07:40:15.207921 210456 kubeadm.go:319] - The kubelet is not running
I1229 07:40:15.208335 210456 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1229 07:40:15.208367 210456 kubeadm.go:319]
I1229 07:40:15.208562 210456 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1229 07:40:15.208767 210456 kubeadm.go:319] - 'systemctl status kubelet'
I1229 07:40:15.208824 210456 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1229 07:40:15.208831 210456 kubeadm.go:319]
I1229 07:40:15.214036 210456 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1229 07:40:15.214541 210456 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1229 07:40:15.214683 210456 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1229 07:40:15.215072 210456 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1229 07:40:15.215094 210456 kubeadm.go:319]
I1229 07:40:15.215202 210456 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
I1229 07:40:15.215232 210456 kubeadm.go:403] duration metric: took 8m6.753852906s to StartCluster
I1229 07:40:15.215267 210456 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1229 07:40:15.215335 210456 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
I1229 07:40:15.242364 210456 cri.go:96] found id: ""
I1229 07:40:15.242397 210456 logs.go:282] 0 containers: []
W1229 07:40:15.242407 210456 logs.go:284] No container was found matching "kube-apiserver"
I1229 07:40:15.242414 210456 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1229 07:40:15.242481 210456 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
I1229 07:40:15.268537 210456 cri.go:96] found id: ""
I1229 07:40:15.268562 210456 logs.go:282] 0 containers: []
W1229 07:40:15.268570 210456 logs.go:284] No container was found matching "etcd"
I1229 07:40:15.268577 210456 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1229 07:40:15.268637 210456 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
I1229 07:40:15.296387 210456 cri.go:96] found id: ""
I1229 07:40:15.296427 210456 logs.go:282] 0 containers: []
W1229 07:40:15.296436 210456 logs.go:284] No container was found matching "coredns"
I1229 07:40:15.296443 210456 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1229 07:40:15.296513 210456 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
I1229 07:40:15.322743 210456 cri.go:96] found id: ""
I1229 07:40:15.322771 210456 logs.go:282] 0 containers: []
W1229 07:40:15.322784 210456 logs.go:284] No container was found matching "kube-scheduler"
I1229 07:40:15.322792 210456 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1229 07:40:15.322868 210456 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
I1229 07:40:15.351562 210456 cri.go:96] found id: ""
I1229 07:40:15.351598 210456 logs.go:282] 0 containers: []
W1229 07:40:15.351607 210456 logs.go:284] No container was found matching "kube-proxy"
I1229 07:40:15.351619 210456 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1229 07:40:15.351682 210456 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
I1229 07:40:15.376895 210456 cri.go:96] found id: ""
I1229 07:40:15.376919 210456 logs.go:282] 0 containers: []
W1229 07:40:15.376928 210456 logs.go:284] No container was found matching "kube-controller-manager"
I1229 07:40:15.376935 210456 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1229 07:40:15.376995 210456 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
I1229 07:40:15.404023 210456 cri.go:96] found id: ""
I1229 07:40:15.404049 210456 logs.go:282] 0 containers: []
W1229 07:40:15.404058 210456 logs.go:284] No container was found matching "kindnet"
I1229 07:40:15.404069 210456 logs.go:123] Gathering logs for dmesg ...
I1229 07:40:15.404082 210456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1229 07:40:15.418184 210456 logs.go:123] Gathering logs for describe nodes ...
I1229 07:40:15.418215 210456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1229 07:40:15.484850 210456 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1229 07:40:15.476614 4804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1229 07:40:15.477057 4804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1229 07:40:15.478609 4804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1229 07:40:15.478976 4804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1229 07:40:15.480500 4804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
E1229 07:40:15.476614 4804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1229 07:40:15.477057 4804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1229 07:40:15.478609 4804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1229 07:40:15.478976 4804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1229 07:40:15.480500 4804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I1229 07:40:15.484926 210456 logs.go:123] Gathering logs for containerd ...
I1229 07:40:15.484952 210456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1229 07:40:15.525775 210456 logs.go:123] Gathering logs for container status ...
I1229 07:40:15.525809 210456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1229 07:40:15.555955 210456 logs.go:123] Gathering logs for kubelet ...
I1229 07:40:15.556034 210456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1229 07:40:15.612571 210456 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001145684s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
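The two troubleshooting commands kubeadm suggests above have to run inside the node. With the docker driver, the kubelet lives inside the force-systemd-flag-275936 container, so a sketch of that follow-up (assuming the profile container is still running, which the docker inspect output later confirms) goes through minikube ssh:
$ out/minikube-linux-arm64 -p force-systemd-flag-275936 ssh "sudo systemctl status kubelet"                       # unit state plus last log lines
$ out/minikube-linux-arm64 -p force-systemd-flag-275936 ssh "sudo journalctl -xeu kubelet --no-pager | tail -50"  # recent kubelet journal entries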
W1229 07:40:15.612644 210456 out.go:285] *
W1229 07:40:15.612696 210456 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
W1229 07:40:15.612713 210456 out.go:285] *
W1229 07:40:15.612962 210456 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
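As a sketch, the log-collection step the box above asks for would be, for this profile:
$ out/minikube-linux-arm64 -p force-systemd-flag-275936 logs --file=logs.txt   # bundle the post-mortem logs into logs.txt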
I1229 07:40:15.617816 210456 out.go:203]
W1229 07:40:15.621782 210456 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
W1229 07:40:15.621867 210456 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W1229 07:40:15.621888 210456 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I1229 07:40:15.625797 210456 out.go:203]
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-flag-275936 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=containerd" : exit status 109
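The suggestion printed in the stderr block can be applied verbatim to the failing invocation; a sketch of the retry, adding only the advised kubelet cgroup-driver override to the flags used above:
$ out/minikube-linux-arm64 start -p force-systemd-flag-275936 --memory=3072 --force-systemd \
    --driver=docker --container-runtime=containerd \
    --extra-config=kubelet.cgroup-driver=systemd   # per the K8S_KUBELET_NOT_RUNNING suggestion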
docker_test.go:121: (dbg) Run: out/minikube-linux-arm64 -p force-systemd-flag-275936 ssh "cat /etc/containerd/config.toml"
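This ssh step is how the test checks that --force-systemd took effect: it reads the containerd config minikube generated inside the node. A quick manual equivalent (a sketch; the TOML table that holds the key differs between containerd 1.x and 2.x) is to grep for the systemd cgroup setting:
$ out/minikube-linux-arm64 -p force-systemd-flag-275936 ssh "grep -n SystemdCgroup /etc/containerd/config.toml"
# expected with --force-systemd: SystemdCgroup = true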
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2025-12-29 07:40:16.063897247 +0000 UTC m=+3236.684374726
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======> post-mortem[TestForceSystemdFlag]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======> post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:240: (dbg) Run: docker inspect force-systemd-flag-275936
helpers_test.go:244: (dbg) docker inspect force-systemd-flag-275936:
-- stdout --
[
{
"Id": "bf90252d40a7fda3b03cc9e5a6113e2c388e060bc2fce5ea26e764325ad8f32c",
"Created": "2025-12-29T07:31:59.891142554Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 210898,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-12-29T07:31:59.955774715Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:b9f008732615ab177a9385fb47e8e8b6783b73758be2e5f7e791427d50517cf7",
"ResolvConfPath": "/var/lib/docker/containers/bf90252d40a7fda3b03cc9e5a6113e2c388e060bc2fce5ea26e764325ad8f32c/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/bf90252d40a7fda3b03cc9e5a6113e2c388e060bc2fce5ea26e764325ad8f32c/hostname",
"HostsPath": "/var/lib/docker/containers/bf90252d40a7fda3b03cc9e5a6113e2c388e060bc2fce5ea26e764325ad8f32c/hosts",
"LogPath": "/var/lib/docker/containers/bf90252d40a7fda3b03cc9e5a6113e2c388e060bc2fce5ea26e764325ad8f32c/bf90252d40a7fda3b03cc9e5a6113e2c388e060bc2fce5ea26e764325ad8f32c-json.log",
"Name": "/force-systemd-flag-275936",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"force-systemd-flag-275936:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "force-systemd-flag-275936",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 3221225472,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 6442450944,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "bf90252d40a7fda3b03cc9e5a6113e2c388e060bc2fce5ea26e764325ad8f32c",
"LowerDir": "/var/lib/docker/overlay2/397d1614610e5d07dc99190fad9ff3d24f96733041a57a7cc505d6540c847e48-init/diff:/var/lib/docker/overlay2/54d5b7cffc5e9463f8f08189f8469b00e160a6e6f01791a5d6d8fd2d4f288a08/diff",
"MergedDir": "/var/lib/docker/overlay2/397d1614610e5d07dc99190fad9ff3d24f96733041a57a7cc505d6540c847e48/merged",
"UpperDir": "/var/lib/docker/overlay2/397d1614610e5d07dc99190fad9ff3d24f96733041a57a7cc505d6540c847e48/diff",
"WorkDir": "/var/lib/docker/overlay2/397d1614610e5d07dc99190fad9ff3d24f96733041a57a7cc505d6540c847e48/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "volume",
"Name": "force-systemd-flag-275936",
"Source": "/var/lib/docker/volumes/force-systemd-flag-275936/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
},
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
}
],
"Config": {
"Hostname": "force-systemd-flag-275936",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "force-systemd-flag-275936",
"name.minikube.sigs.k8s.io": "force-systemd-flag-275936",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "e50ce24d654b7ce8d54592ef6b0c5b855027bace315a9ff31ab4320b9cfdb634",
"SandboxKey": "/var/run/docker/netns/e50ce24d654b",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33043"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33044"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33047"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33045"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33046"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"force-systemd-flag-275936": {
"IPAMConfig": {
"IPv4Address": "192.168.85.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "5a:30:e2:6b:f6:b8",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "d4040d5a87c0265f8c13a107209cef1b2ed1391d8349dff186e2e42422778dae",
"EndpointID": "39418bbda9bd50f7adeae3802131531d2f29ea9a847038a1e50125a2b088b07e",
"Gateway": "192.168.85.1",
"IPAddress": "192.168.85.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"force-systemd-flag-275936",
"bf90252d40a7"
]
}
}
}
}
]
-- /stdout --
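In the inspect output above, HostConfig.PortBindings requests ephemeral host ports (HostPort is empty), while NetworkSettings.Ports records what the daemon actually assigned, e.g. 8443/tcp on 127.0.0.1:33046. A sketch for reading one mapping directly instead of scanning the full JSON:
$ docker port force-systemd-flag-275936 8443/tcp   # prints 127.0.0.1:33046 for this run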
helpers_test.go:248: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-275936 -n force-systemd-flag-275936
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-275936 -n force-systemd-flag-275936: exit status 6 (310.942695ms)
-- stdout --
Running
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1229 07:40:16.378613 239592 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-275936" does not appear in /home/jenkins/minikube-integration/22353-2531/kubeconfig
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
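The stale-context warning and the kubeconfig endpoint error both follow from the failed start: the profile never registered an API endpoint in the shared kubeconfig. On a profile that did come up, the fix the warning advises would be a sketch like:
$ out/minikube-linux-arm64 -p force-systemd-flag-275936 update-context   # rewrite this profile's kubeconfig entry
$ kubectl config current-context                                         # confirm kubectl now targets the profile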
helpers_test.go:253: <<< TestForceSystemdFlag FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======> post-mortem[TestForceSystemdFlag]: minikube logs <======
helpers_test.go:256: (dbg) Run: out/minikube-linux-arm64 -p force-systemd-flag-275936 logs -n 25
helpers_test.go:261: TestForceSystemdFlag logs:
-- stdout --
==> Audit <==
┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ start │ -p old-k8s-version-599664 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-599664 │ jenkins │ v1.37.0 │ 29 Dec 25 07:34 UTC │ 29 Dec 25 07:35 UTC │
│ addons │ enable metrics-server -p old-k8s-version-599664 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ old-k8s-version-599664 │ jenkins │ v1.37.0 │ 29 Dec 25 07:35 UTC │ 29 Dec 25 07:35 UTC │
│ stop │ -p old-k8s-version-599664 --alsologtostderr -v=3 │ old-k8s-version-599664 │ jenkins │ v1.37.0 │ 29 Dec 25 07:35 UTC │ 29 Dec 25 07:35 UTC │
│ addons │ enable dashboard -p old-k8s-version-599664 --images=MetricsScraper=registry.k8s.io/echoserver:1.4 │ old-k8s-version-599664 │ jenkins │ v1.37.0 │ 29 Dec 25 07:35 UTC │ 29 Dec 25 07:35 UTC │
│ start │ -p old-k8s-version-599664 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-599664 │ jenkins │ v1.37.0 │ 29 Dec 25 07:35 UTC │ 29 Dec 25 07:36 UTC │
│ image │ old-k8s-version-599664 image list --format=json │ old-k8s-version-599664 │ jenkins │ v1.37.0 │ 29 Dec 25 07:36 UTC │ 29 Dec 25 07:36 UTC │
│ pause │ -p old-k8s-version-599664 --alsologtostderr -v=1 │ old-k8s-version-599664 │ jenkins │ v1.37.0 │ 29 Dec 25 07:36 UTC │ 29 Dec 25 07:36 UTC │
│ unpause │ -p old-k8s-version-599664 --alsologtostderr -v=1 │ old-k8s-version-599664 │ jenkins │ v1.37.0 │ 29 Dec 25 07:36 UTC │ 29 Dec 25 07:36 UTC │
│ delete │ -p old-k8s-version-599664 │ old-k8s-version-599664 │ jenkins │ v1.37.0 │ 29 Dec 25 07:36 UTC │ 29 Dec 25 07:36 UTC │
│ delete │ -p old-k8s-version-599664 │ old-k8s-version-599664 │ jenkins │ v1.37.0 │ 29 Dec 25 07:36 UTC │ 29 Dec 25 07:36 UTC │
│ start │ -p embed-certs-294279 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0 │ embed-certs-294279 │ jenkins │ v1.37.0 │ 29 Dec 25 07:36 UTC │ 29 Dec 25 07:37 UTC │
│ addons │ enable metrics-server -p embed-certs-294279 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ embed-certs-294279 │ jenkins │ v1.37.0 │ 29 Dec 25 07:37 UTC │ 29 Dec 25 07:37 UTC │
│ stop │ -p embed-certs-294279 --alsologtostderr -v=3 │ embed-certs-294279 │ jenkins │ v1.37.0 │ 29 Dec 25 07:37 UTC │ 29 Dec 25 07:37 UTC │
│ addons │ enable dashboard -p embed-certs-294279 --images=MetricsScraper=registry.k8s.io/echoserver:1.4 │ embed-certs-294279 │ jenkins │ v1.37.0 │ 29 Dec 25 07:37 UTC │ 29 Dec 25 07:37 UTC │
│ start │ -p embed-certs-294279 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0 │ embed-certs-294279 │ jenkins │ v1.37.0 │ 29 Dec 25 07:37 UTC │ 29 Dec 25 07:38 UTC │
│ image │ embed-certs-294279 image list --format=json │ embed-certs-294279 │ jenkins │ v1.37.0 │ 29 Dec 25 07:39 UTC │ 29 Dec 25 07:39 UTC │
│ pause │ -p embed-certs-294279 --alsologtostderr -v=1 │ embed-certs-294279 │ jenkins │ v1.37.0 │ 29 Dec 25 07:39 UTC │ 29 Dec 25 07:39 UTC │
│ unpause │ -p embed-certs-294279 --alsologtostderr -v=1 │ embed-certs-294279 │ jenkins │ v1.37.0 │ 29 Dec 25 07:39 UTC │ 29 Dec 25 07:39 UTC │
│ delete │ -p embed-certs-294279 │ embed-certs-294279 │ jenkins │ v1.37.0 │ 29 Dec 25 07:39 UTC │ 29 Dec 25 07:39 UTC │
│ delete │ -p embed-certs-294279 │ embed-certs-294279 │ jenkins │ v1.37.0 │ 29 Dec 25 07:39 UTC │ 29 Dec 25 07:39 UTC │
│ delete │ -p disable-driver-mounts-948437 │ disable-driver-mounts-948437 │ jenkins │ v1.37.0 │ 29 Dec 25 07:39 UTC │ 29 Dec 25 07:39 UTC │
│ start │ -p no-preload-918033 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0 │ no-preload-918033 │ jenkins │ v1.37.0 │ 29 Dec 25 07:39 UTC │ 29 Dec 25 07:40 UTC │
│ addons │ enable metrics-server -p no-preload-918033 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ no-preload-918033 │ jenkins │ v1.37.0 │ 29 Dec 25 07:40 UTC │ 29 Dec 25 07:40 UTC │
│ stop │ -p no-preload-918033 --alsologtostderr -v=3 │ no-preload-918033 │ jenkins │ v1.37.0 │ 29 Dec 25 07:40 UTC │ │
│ ssh │ force-systemd-flag-275936 ssh cat /etc/containerd/config.toml │ force-systemd-flag-275936 │ jenkins │ v1.37.0 │ 29 Dec 25 07:40 UTC │ 29 Dec 25 07:40 UTC │
└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/29 07:39:08
Running on machine: ip-172-31-24-2
Binary: Built with gc go1.25.5 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1229 07:39:08.253726 234900 out.go:360] Setting OutFile to fd 1 ...
I1229 07:39:08.253941 234900 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 07:39:08.253968 234900 out.go:374] Setting ErrFile to fd 2...
I1229 07:39:08.253988 234900 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 07:39:08.254396 234900 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-2531/.minikube/bin
I1229 07:39:08.254946 234900 out.go:368] Setting JSON to false
I1229 07:39:08.255864 234900 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4899,"bootTime":1766989049,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
I1229 07:39:08.255984 234900 start.go:143] virtualization:
I1229 07:39:08.259925 234900 out.go:179] * [no-preload-918033] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I1229 07:39:08.264254 234900 out.go:179] - MINIKUBE_LOCATION=22353
I1229 07:39:08.264327 234900 notify.go:221] Checking for updates...
I1229 07:39:08.268454 234900 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1229 07:39:08.271672 234900 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22353-2531/kubeconfig
I1229 07:39:08.274842 234900 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-2531/.minikube
I1229 07:39:08.277885 234900 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I1229 07:39:08.280957 234900 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1229 07:39:08.284418 234900 config.go:182] Loaded profile config "force-systemd-flag-275936": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1229 07:39:08.284591 234900 driver.go:422] Setting default libvirt URI to qemu:///system
I1229 07:39:08.314894 234900 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
I1229 07:39:08.315024 234900 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1229 07:39:08.372212 234900 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-29 07:39:08.362259697 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1229 07:39:08.372319 234900 docker.go:319] overlay module found
I1229 07:39:08.375573 234900 out.go:179] * Using the docker driver based on user configuration
I1229 07:39:08.378630 234900 start.go:309] selected driver: docker
I1229 07:39:08.378661 234900 start.go:928] validating driver "docker" against <nil>
I1229 07:39:08.378675 234900 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1229 07:39:08.379599 234900 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1229 07:39:08.435593 234900 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-29 07:39:08.426040108 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1229 07:39:08.435738 234900 start_flags.go:333] no existing cluster config was found, will generate one from the flags
I1229 07:39:08.435972 234900 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1229 07:39:08.439007 234900 out.go:179] * Using Docker driver with root privileges
I1229 07:39:08.441942 234900 cni.go:84] Creating CNI manager for ""
I1229 07:39:08.442009 234900 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1229 07:39:08.442022 234900 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
I1229 07:39:08.442097 234900 start.go:353] cluster config:
{Name:no-preload-918033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-918033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1229 07:39:08.445181 234900 out.go:179] * Starting "no-preload-918033" primary control-plane node in "no-preload-918033" cluster
I1229 07:39:08.448006 234900 cache.go:134] Beginning downloading kic base image for docker with containerd
I1229 07:39:08.451038 234900 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
I1229 07:39:08.454039 234900 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I1229 07:39:08.454129 234900 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
I1229 07:39:08.454212 234900 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/config.json ...
I1229 07:39:08.454267 234900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/config.json: {Name:mk57bc12d2c7e99169d51482e2813f6bee0f00eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1229 07:39:08.454549 234900 cache.go:107] acquiring lock: {Name:mka009506884cbc45a9becd8890cfc8b6acba926 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1229 07:39:08.454628 234900 cache.go:115] /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
I1229 07:39:08.454642 234900 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 103.697µs
I1229 07:39:08.454655 234900 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
I1229 07:39:08.454686 234900 cache.go:107] acquiring lock: {Name:mk7d5d886bd09d6d06a4adcb06e83ad6d78e5fc4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1229 07:39:08.454733 234900 cache.go:115] /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 exists
I1229 07:39:08.454739 234900 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0" -> "/home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0" took 61.03µs
I1229 07:39:08.454745 234900 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0 -> /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 succeeded
I1229 07:39:08.454746 234900 cache.go:107] acquiring lock: {Name:mk38e6c21d5541b01903a26199dd289b6ff01fd9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1229 07:39:08.454764 234900 cache.go:107] acquiring lock: {Name:mk03e847ca15d25512ae766375c2e904a7fd4e83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1229 07:39:08.454791 234900 cache.go:107] acquiring lock: {Name:mk1cff5b084f8d2ac170cbb020a0f68379a8bd0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1229 07:39:08.454830 234900 cache.go:115] /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
I1229 07:39:08.454838 234900 cache.go:115] /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
I1229 07:39:08.454839 234900 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 80.592µs
I1229 07:39:08.454847 234900 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
I1229 07:39:08.454844 234900 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 54.13µs
I1229 07:39:08.454854 234900 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
I1229 07:39:08.454861 234900 cache.go:107] acquiring lock: {Name:mk5ba61770185319c8457b47354fe470903e8a33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1229 07:39:08.454880 234900 cache.go:107] acquiring lock: {Name:mk75305189acd73002b72ce07e1716087e384298 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1229 07:39:08.454893 234900 cache.go:115] /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 exists
I1229 07:39:08.454899 234900 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0" -> "/home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0" took 39.96µs
I1229 07:39:08.454905 234900 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0 -> /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 succeeded
I1229 07:39:08.454921 234900 cache.go:115] /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 exists
I1229 07:39:08.454923 234900 cache.go:115] /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 exists
I1229 07:39:08.454926 234900 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0" -> "/home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0" took 188.055µs
I1229 07:39:08.454932 234900 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0 -> /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 succeeded
I1229 07:39:08.454863 234900 cache.go:107] acquiring lock: {Name:mk67beadb7d0c4522ccf8e2398a82bf1fd7da079 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1229 07:39:08.455011 234900 cache.go:115] /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
I1229 07:39:08.455064 234900 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 201.192µs
I1229 07:39:08.455077 234900 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
I1229 07:39:08.454930 234900 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0" -> "/home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0" took 51.11µs
I1229 07:39:08.455084 234900 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0 -> /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 succeeded
I1229 07:39:08.455122 234900 cache.go:87] Successfully saved all images to host disk.
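[Editor's note] The cache lines above trace a check-then-save pattern: each image takes its own named lock, the cached tarball path is stat'ed, and the save is skipped when the file already exists (hence the microsecond durations). A minimal Go sketch of that pattern follows; the plain mutex and the stubbed-out save stand in for minikube's named file lock and real pull-and-save, so treat it as illustrative only.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"sync"
	"time"
)

// cacheImage mirrors the check-then-save flow in the log: lock, stat the
// cached tarball, and only save when it is missing.
func cacheImage(cacheDir, image string, mu *sync.Mutex) error {
	start := time.Now()
	mu.Lock() // minikube uses a named file lock with a 10m timeout instead
	defer mu.Unlock()
	dst := filepath.Join(cacheDir, strings.NewReplacer("/", "_", ":", "_").Replace(image))
	if _, err := os.Stat(dst); err == nil {
		fmt.Printf("cache image %q -> %q took %s (exists)\n", image, dst, time.Since(start))
		return nil
	}
	// The real code pulls the image and writes it out as a tarball here.
	return os.WriteFile(dst, nil, 0o644)
}

func main() {
	var mu sync.Mutex
	dir, err := os.MkdirTemp("", "image-cache")
	if err != nil {
		panic(err)
	}
	for _, img := range []string{"registry.k8s.io/pause:3.10.1", "registry.k8s.io/etcd:3.6.6-0"} {
		if err := cacheImage(dir, img, &mu); err != nil {
			fmt.Println(err)
		}
	}
}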
I1229 07:39:08.474242 234900 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
I1229 07:39:08.474266 234900 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
I1229 07:39:08.474293 234900 cache.go:243] Successfully downloaded all kic artifacts
I1229 07:39:08.474325 234900 start.go:360] acquireMachinesLock for no-preload-918033: {Name:mkb893b58aed3bac3f457e96e7f679b0befc5a2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1229 07:39:08.474434 234900 start.go:364] duration metric: took 89.404µs to acquireMachinesLock for "no-preload-918033"
I1229 07:39:08.474463 234900 start.go:93] Provisioning new machine with config: &{Name:no-preload-918033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-918033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1229 07:39:08.474543 234900 start.go:125] createHost starting for "" (driver="docker")
I1229 07:39:08.479701 234900 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
I1229 07:39:08.479947 234900 start.go:159] libmachine.API.Create for "no-preload-918033" (driver="docker")
I1229 07:39:08.479984 234900 client.go:173] LocalClient.Create starting
I1229 07:39:08.480064 234900 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca.pem
I1229 07:39:08.480103 234900 main.go:144] libmachine: Decoding PEM data...
I1229 07:39:08.480123 234900 main.go:144] libmachine: Parsing certificate...
I1229 07:39:08.480179 234900 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-2531/.minikube/certs/cert.pem
I1229 07:39:08.480202 234900 main.go:144] libmachine: Decoding PEM data...
I1229 07:39:08.480218 234900 main.go:144] libmachine: Parsing certificate...
I1229 07:39:08.480589 234900 cli_runner.go:164] Run: docker network inspect no-preload-918033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1229 07:39:08.496345 234900 cli_runner.go:211] docker network inspect no-preload-918033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1229 07:39:08.496439 234900 network_create.go:284] running [docker network inspect no-preload-918033] to gather additional debugging logs...
I1229 07:39:08.496460 234900 cli_runner.go:164] Run: docker network inspect no-preload-918033
W1229 07:39:08.512483 234900 cli_runner.go:211] docker network inspect no-preload-918033 returned with exit code 1
I1229 07:39:08.512516 234900 network_create.go:287] error running [docker network inspect no-preload-918033]: docker network inspect no-preload-918033: exit status 1
stdout:
[]
stderr:
Error response from daemon: network no-preload-918033 not found
I1229 07:39:08.512542 234900 network_create.go:289] output of [docker network inspect no-preload-918033]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network no-preload-918033 not found
** /stderr **
I1229 07:39:08.512646 234900 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1229 07:39:08.534586 234900 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1d2fb4677b5c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:ba:f6:c7:fb:95} reservation:<nil>}
I1229 07:39:08.535022 234900 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-2e904d35ba79 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:62:bf:e8:2d:86:57} reservation:<nil>}
I1229 07:39:08.535463 234900 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a0c1c34f63a4 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:7a:96:61:f1:83:fb} reservation:<nil>}
I1229 07:39:08.535972 234900 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a06a30}
I1229 07:39:08.535996 234900 network_create.go:124] attempt to create docker network no-preload-918033 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
I1229 07:39:08.536061 234900 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-918033 no-preload-918033
I1229 07:39:08.592919 234900 network_create.go:108] docker network no-preload-918033 192.168.76.0/24 created
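[Editor's note] Subnet selection above walks candidate private /24 blocks in steps of 9 (192.168.49.0, 58.0, 67.0, ...) and takes the first one no existing bridge occupies. A small sketch of that scan; the taken set is hard-coded here for illustration, whereas minikube derives it from the host's interfaces.

package main

import "fmt"

func main() {
	taken := map[string]bool{ // bridges already present on this host
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
	}
	for third := 49; third <= 255; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if taken[subnet] {
			fmt.Println("skipping subnet", subnet, "that is taken")
			continue
		}
		fmt.Println("using free private subnet", subnet) // 192.168.76.0/24 in this run
		break
	}
}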
I1229 07:39:08.592955 234900 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-918033" container
I1229 07:39:08.593188 234900 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1229 07:39:08.609569 234900 cli_runner.go:164] Run: docker volume create no-preload-918033 --label name.minikube.sigs.k8s.io=no-preload-918033 --label created_by.minikube.sigs.k8s.io=true
I1229 07:39:08.627029 234900 oci.go:103] Successfully created a docker volume no-preload-918033
I1229 07:39:08.627121 234900 cli_runner.go:164] Run: docker run --rm --name no-preload-918033-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-918033 --entrypoint /usr/bin/test -v no-preload-918033:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -d /var/lib
I1229 07:39:09.199812 234900 oci.go:107] Successfully prepared a docker volume no-preload-918033
I1229 07:39:09.199873 234900 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
W1229 07:39:09.200009 234900 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1229 07:39:09.200142 234900 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1229 07:39:09.263081 234900 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-918033 --name no-preload-918033 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-918033 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-918033 --network no-preload-918033 --ip 192.168.76.2 --volume no-preload-918033:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409
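[Editor's note] The node itself is just a privileged container on the dedicated network with a static IP, its /var on the prepared volume, and the SSH and API ports published to random host ports on 127.0.0.1. A sketch of how such an invocation can be assembled in Go (abridged flag list, printed rather than executed; minikube shells out to docker much like this through its cli_runner).

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	name, ip := "no-preload-918033", "192.168.76.2"
	args := []string{
		"run", "-d", "-t", "--privileged",
		"--security-opt", "seccomp=unconfined",
		"--hostname", name, "--name", name,
		"--network", name, "--ip", ip,
		"--volume", name + ":/var",
		"--memory=3072mb", "--cpus=2",
		"--publish=127.0.0.1::22",   // random host port, inspected afterwards
		"--publish=127.0.0.1::8443", // ditto for the API server
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353",
	}
	// Print instead of running, for illustration.
	fmt.Println(exec.Command("docker", args...).String())
}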
I1229 07:39:09.593123 234900 cli_runner.go:164] Run: docker container inspect no-preload-918033 --format={{.State.Running}}
I1229 07:39:09.626510 234900 cli_runner.go:164] Run: docker container inspect no-preload-918033 --format={{.State.Status}}
I1229 07:39:09.648579 234900 cli_runner.go:164] Run: docker exec no-preload-918033 stat /var/lib/dpkg/alternatives/iptables
I1229 07:39:09.701824 234900 oci.go:144] the created container "no-preload-918033" has a running status.
I1229 07:39:09.701851 234900 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22353-2531/.minikube/machines/no-preload-918033/id_rsa...
I1229 07:39:09.872417 234900 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22353-2531/.minikube/machines/no-preload-918033/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1229 07:39:09.897573 234900 cli_runner.go:164] Run: docker container inspect no-preload-918033 --format={{.State.Status}}
I1229 07:39:09.923332 234900 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1229 07:39:09.923351 234900 kic_runner.go:114] Args: [docker exec --privileged no-preload-918033 chown docker:docker /home/docker/.ssh/authorized_keys]
I1229 07:39:09.970949 234900 cli_runner.go:164] Run: docker container inspect no-preload-918033 --format={{.State.Status}}
I1229 07:39:10.002805 234900 machine.go:94] provisionDockerMachine start ...
I1229 07:39:10.002903 234900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-918033
I1229 07:39:10.030392 234900 main.go:144] libmachine: Using SSH client type: native
I1229 07:39:10.030734 234900 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33073 <nil> <nil>}
I1229 07:39:10.030744 234900 main.go:144] libmachine: About to run SSH command:
hostname
I1229 07:39:10.031421 234900 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46652->127.0.0.1:33073: read: connection reset by peer
I1229 07:39:13.184915 234900 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-918033
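[Editor's note] The handshake failure above is expected: sshd inside the freshly started container is not ready yet, so the dial is retried until it succeeds (about three seconds in this run). The retry shape, reduced to a plain TCP dial loop; minikube layers a real SSH handshake on top of this.

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH dials the forwarded port until a connection sticks or the
// deadline passes.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		fmt.Println("dial failed, retrying:", err) // e.g. connection reset by peer
		time.Sleep(time.Second)
	}
	return fmt.Errorf("ssh on %s not ready after %s", addr, timeout)
}

func main() {
	if err := waitForSSH("127.0.0.1:33073", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}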
I1229 07:39:13.184938 234900 ubuntu.go:182] provisioning hostname "no-preload-918033"
I1229 07:39:13.185018 234900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-918033
I1229 07:39:13.207931 234900 main.go:144] libmachine: Using SSH client type: native
I1229 07:39:13.208247 234900 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33073 <nil> <nil>}
I1229 07:39:13.208264 234900 main.go:144] libmachine: About to run SSH command:
sudo hostname no-preload-918033 && echo "no-preload-918033" | sudo tee /etc/hostname
I1229 07:39:13.370349 234900 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-918033
I1229 07:39:13.370422 234900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-918033
I1229 07:39:13.388827 234900 main.go:144] libmachine: Using SSH client type: native
I1229 07:39:13.389192 234900 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil> [] 0s} 127.0.0.1 33073 <nil> <nil>}
I1229 07:39:13.389217 234900 main.go:144] libmachine: About to run SSH command:
if ! grep -xq '.*\sno-preload-918033' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-918033/g' /etc/hosts;
  else
    echo '127.0.1.1 no-preload-918033' | sudo tee -a /etc/hosts;
  fi
fi
I1229 07:39:13.541382 234900 main.go:144] libmachine: SSH cmd err, output: <nil>:
I1229 07:39:13.541470 234900 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-2531/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-2531/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-2531/.minikube}
I1229 07:39:13.541533 234900 ubuntu.go:190] setting up certificates
I1229 07:39:13.541565 234900 provision.go:84] configureAuth start
I1229 07:39:13.541676 234900 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-918033
I1229 07:39:13.558688 234900 provision.go:143] copyHostCerts
I1229 07:39:13.558751 234900 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-2531/.minikube/ca.pem, removing ...
I1229 07:39:13.558759 234900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-2531/.minikube/ca.pem
I1229 07:39:13.558839 234900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-2531/.minikube/ca.pem (1082 bytes)
I1229 07:39:13.558935 234900 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-2531/.minikube/cert.pem, removing ...
I1229 07:39:13.558940 234900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-2531/.minikube/cert.pem
I1229 07:39:13.558966 234900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-2531/.minikube/cert.pem (1123 bytes)
I1229 07:39:13.559028 234900 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-2531/.minikube/key.pem, removing ...
I1229 07:39:13.559032 234900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-2531/.minikube/key.pem
I1229 07:39:13.559063 234900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-2531/.minikube/key.pem (1679 bytes)
I1229 07:39:13.559123 234900 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-2531/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca-key.pem org=jenkins.no-preload-918033 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-918033]
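[Editor's note] The server certificate is minted with SANs covering loopback, the node's static IP, and its hostnames, exactly the san=[...] list above. A self-contained crypto/x509 sketch; it self-signs for brevity where minikube signs with its CA key, and the org and expiry mirror the values in this log.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-918033"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // the CertExpiration in the config above
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		DNSNames:     []string{"localhost", "minikube", "no-preload-918033"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}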
I1229 07:39:13.744999 234900 provision.go:177] copyRemoteCerts
I1229 07:39:13.745105 234900 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1229 07:39:13.745159 234900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-918033
I1229 07:39:13.764839 234900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/no-preload-918033/id_rsa Username:docker}
I1229 07:39:13.873310 234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1229 07:39:13.891176 234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I1229 07:39:13.908657 234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1229 07:39:13.927338 234900 provision.go:87] duration metric: took 385.73539ms to configureAuth
I1229 07:39:13.927386 234900 ubuntu.go:206] setting minikube options for container-runtime
I1229 07:39:13.927679 234900 config.go:182] Loaded profile config "no-preload-918033": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1229 07:39:13.927693 234900 machine.go:97] duration metric: took 3.924869366s to provisionDockerMachine
I1229 07:39:13.927707 234900 client.go:176] duration metric: took 5.447715458s to LocalClient.Create
I1229 07:39:13.927735 234900 start.go:167] duration metric: took 5.447790413s to libmachine.API.Create "no-preload-918033"
I1229 07:39:13.927746 234900 start.go:293] postStartSetup for "no-preload-918033" (driver="docker")
I1229 07:39:13.927760 234900 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1229 07:39:13.927847 234900 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1229 07:39:13.927902 234900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-918033
I1229 07:39:13.950733 234900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/no-preload-918033/id_rsa Username:docker}
I1229 07:39:14.061186 234900 ssh_runner.go:195] Run: cat /etc/os-release
I1229 07:39:14.064731 234900 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1229 07:39:14.064767 234900 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1229 07:39:14.064788 234900 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-2531/.minikube/addons for local assets ...
I1229 07:39:14.064855 234900 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-2531/.minikube/files for local assets ...
I1229 07:39:14.064937 234900 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-2531/.minikube/files/etc/ssl/certs/43522.pem -> 43522.pem in /etc/ssl/certs
I1229 07:39:14.065064 234900 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1229 07:39:14.072620 234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/files/etc/ssl/certs/43522.pem --> /etc/ssl/certs/43522.pem (1708 bytes)
I1229 07:39:14.090493 234900 start.go:296] duration metric: took 162.731535ms for postStartSetup
I1229 07:39:14.090867 234900 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-918033
I1229 07:39:14.107734 234900 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/config.json ...
I1229 07:39:14.108023 234900 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1229 07:39:14.108072 234900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-918033
I1229 07:39:14.125268 234900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/no-preload-918033/id_rsa Username:docker}
I1229 07:39:14.234121 234900 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1229 07:39:14.238637 234900 start.go:128] duration metric: took 5.764078763s to createHost
I1229 07:39:14.238665 234900 start.go:83] releasing machines lock for "no-preload-918033", held for 5.76421762s
I1229 07:39:14.238734 234900 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-918033
I1229 07:39:14.256198 234900 ssh_runner.go:195] Run: cat /version.json
I1229 07:39:14.256265 234900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-918033
I1229 07:39:14.256333 234900 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1229 07:39:14.256408 234900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-918033
I1229 07:39:14.276099 234900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/no-preload-918033/id_rsa Username:docker}
I1229 07:39:14.297296 234900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/no-preload-918033/id_rsa Username:docker}
I1229 07:39:14.478648 234900 ssh_runner.go:195] Run: systemctl --version
I1229 07:39:14.485322 234900 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1229 07:39:14.489677 234900 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1229 07:39:14.489781 234900 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1229 07:39:14.517950 234900 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
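[Editor's note] Competing CNI configs are not deleted, only renamed with a .mk_disabled suffix so the runtime ignores them and the change stays reversible. A sketch of that rename pass; the helper name disableBridgeCNI is illustrative, not minikube's API.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNI renames bridge/podman CNI configs so the container runtime
// ignores them, keeping the originals recoverable.
func disableBridgeCNI(dir string) error {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return err
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return err
			}
			fmt.Println("disabled", src)
		}
	}
	return nil
}

func main() {
	if err := disableBridgeCNI("/etc/cni/net.d"); err != nil {
		fmt.Println(err)
	}
}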
I1229 07:39:14.517982 234900 start.go:496] detecting cgroup driver to use...
I1229 07:39:14.518015 234900 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1229 07:39:14.518073 234900 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1229 07:39:14.533589 234900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1229 07:39:14.546296 234900 docker.go:218] disabling cri-docker service (if available) ...
I1229 07:39:14.546357 234900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1229 07:39:14.564903 234900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1229 07:39:14.585583 234900 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1229 07:39:14.717173 234900 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1229 07:39:14.855840 234900 docker.go:234] disabling docker service ...
I1229 07:39:14.855901 234900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1229 07:39:14.878673 234900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1229 07:39:14.891815 234900 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1229 07:39:15.016371 234900 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1229 07:39:15.151016 234900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1229 07:39:15.165985 234900 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1229 07:39:15.182014 234900 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1229 07:39:15.191756 234900 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1229 07:39:15.201012 234900 containerd.go:147] configuring containerd to use "cgroupfs" as cgroup driver...
I1229 07:39:15.201098 234900 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1229 07:39:15.210978 234900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1229 07:39:15.220009 234900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1229 07:39:15.229423 234900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1229 07:39:15.238426 234900 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1229 07:39:15.246955 234900 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1229 07:39:15.256420 234900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1229 07:39:15.265188 234900 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
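[Editor's note] Because the host reports the cgroupfs driver, containerd's config.toml is patched in place with the sed commands above: SystemdCgroup is forced to false and legacy runtime names are rewritten to io.containerd.runc.v2. The core rewrite, expressed as a Go regexp over a representative config fragment for illustration:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[plugins."io.containerd.cri.v1.runtime".containerd.runtimes.runc.options]
  SystemdCgroup = true`
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}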
I1229 07:39:15.274901 234900 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1229 07:39:15.282729 234900 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1229 07:39:15.290378 234900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1229 07:39:15.414362 234900 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1229 07:39:15.524937 234900 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
I1229 07:39:15.525091 234900 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1229 07:39:15.529380 234900 start.go:574] Will wait 60s for crictl version
I1229 07:39:15.529471 234900 ssh_runner.go:195] Run: which crictl
I1229 07:39:15.535572 234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1229 07:39:15.563427 234900 start.go:590] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v2.2.1
RuntimeApiVersion: v1
I1229 07:39:15.563539 234900 ssh_runner.go:195] Run: containerd --version
I1229 07:39:15.583230 234900 ssh_runner.go:195] Run: containerd --version
I1229 07:39:15.609806 234900 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
I1229 07:39:15.612706 234900 cli_runner.go:164] Run: docker network inspect no-preload-918033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1229 07:39:15.630810 234900 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I1229 07:39:15.634949 234900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1229 07:39:15.645913 234900 kubeadm.go:884] updating cluster {Name:no-preload-918033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-918033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
I1229 07:39:15.646043 234900 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I1229 07:39:15.646096 234900 ssh_runner.go:195] Run: sudo crictl images --output json
I1229 07:39:15.672216 234900 containerd.go:631] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0". assuming images are not preloaded.
I1229 07:39:15.672241 234900 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0 registry.k8s.io/kube-controller-manager:v1.35.0 registry.k8s.io/kube-scheduler:v1.35.0 registry.k8s.io/kube-proxy:v1.35.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.6-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
I1229 07:39:15.672291 234900 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I1229 07:39:15.672495 234900 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0
I1229 07:39:15.672608 234900 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0
I1229 07:39:15.672695 234900 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0
I1229 07:39:15.672799 234900 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0
I1229 07:39:15.672881 234900 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
I1229 07:39:15.672977 234900 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
I1229 07:39:15.673094 234900 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
I1229 07:39:15.674469 234900 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0
I1229 07:39:15.674873 234900 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
I1229 07:39:15.675013 234900 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0
I1229 07:39:15.676181 234900 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
I1229 07:39:15.676656 234900 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0
I1229 07:39:15.676733 234900 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
I1229 07:39:15.676803 234900 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
I1229 07:39:15.677101 234900 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0
I1229 07:39:15.993005 234900 containerd.go:268] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
I1229 07:39:15.993125 234900 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
I1229 07:39:15.995200 234900 containerd.go:268] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0" and sha "de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5"
I1229 07:39:15.995279 234900 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0
I1229 07:39:15.996013 234900 containerd.go:268] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0" and sha "c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856"
I1229 07:39:15.996083 234900 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0
I1229 07:39:15.999478 234900 containerd.go:268] Checking existence of image with name "registry.k8s.io/etcd:3.6.6-0" and sha "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57"
I1229 07:39:15.999593 234900 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.6-0
I1229 07:39:16.004311 234900 containerd.go:268] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0" and sha "88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0"
I1229 07:39:16.004429 234900 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0
I1229 07:39:16.010629 234900 containerd.go:268] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
I1229 07:39:16.010748 234900 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
I1229 07:39:16.016900 234900 containerd.go:268] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0" and sha "ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f"
I1229 07:39:16.017080 234900 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0
I1229 07:39:16.053368 234900 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
I1229 07:39:16.053437 234900 cri.go:226] Removing image: registry.k8s.io/pause:3.10.1
I1229 07:39:16.053510 234900 ssh_runner.go:195] Run: which crictl
I1229 07:39:16.060996 234900 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0" does not exist at hash "de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5" in container runtime
I1229 07:39:16.061066 234900 cri.go:226] Removing image: registry.k8s.io/kube-proxy:v1.35.0
I1229 07:39:16.061148 234900 ssh_runner.go:195] Run: which crictl
I1229 07:39:16.080350 234900 cache_images.go:118] "registry.k8s.io/etcd:3.6.6-0" needs transfer: "registry.k8s.io/etcd:3.6.6-0" does not exist at hash "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57" in container runtime
I1229 07:39:16.080582 234900 cri.go:226] Removing image: registry.k8s.io/etcd:3.6.6-0
I1229 07:39:16.080639 234900 ssh_runner.go:195] Run: which crictl
I1229 07:39:16.080448 234900 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0" does not exist at hash "c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856" in container runtime
I1229 07:39:16.080699 234900 cri.go:226] Removing image: registry.k8s.io/kube-apiserver:v1.35.0
I1229 07:39:16.080721 234900 ssh_runner.go:195] Run: which crictl
I1229 07:39:16.080534 234900 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0" does not exist at hash "88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0" in container runtime
I1229 07:39:16.080749 234900 cri.go:226] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0
I1229 07:39:16.080769 234900 ssh_runner.go:195] Run: which crictl
I1229 07:39:16.090692 234900 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
I1229 07:39:16.090745 234900 cri.go:226] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
I1229 07:39:16.090796 234900 ssh_runner.go:195] Run: which crictl
I1229 07:39:16.090888 234900 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0" does not exist at hash "ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f" in container runtime
I1229 07:39:16.090909 234900 cri.go:226] Removing image: registry.k8s.io/kube-scheduler:v1.35.0
I1229 07:39:16.090932 234900 ssh_runner.go:195] Run: which crictl
I1229 07:39:16.091012 234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
I1229 07:39:16.091082 234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0
I1229 07:39:16.095055 234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
I1229 07:39:16.095147 234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0
I1229 07:39:16.095216 234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0
I1229 07:39:16.172466 234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0
I1229 07:39:16.172572 234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
I1229 07:39:16.172675 234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0
I1229 07:39:16.172783 234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
I1229 07:39:16.211542 234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0
I1229 07:39:16.211694 234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
I1229 07:39:16.211791 234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0
I1229 07:39:16.324447 234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
I1229 07:39:16.324614 234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0
I1229 07:39:16.324759 234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
I1229 07:39:16.324966 234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0
I1229 07:39:16.351114 234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0
I1229 07:39:16.351301 234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0
I1229 07:39:16.351441 234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
I1229 07:39:16.443279 234900 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
I1229 07:39:16.443412 234900 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0
I1229 07:39:16.443526 234900 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0
I1229 07:39:16.443577 234900 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
I1229 07:39:16.443694 234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
I1229 07:39:16.443788 234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0
I1229 07:39:16.475826 234900 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0
I1229 07:39:16.476023 234900 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0
I1229 07:39:16.476153 234900 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0
I1229 07:39:16.476240 234900 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0
I1229 07:39:16.476461 234900 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0
I1229 07:39:16.476518 234900 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0
I1229 07:39:16.503691 234900 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0
I1229 07:39:16.503931 234900 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.6-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/images/etcd_3.6.6-0': No such file or directory
I1229 07:39:16.503990 234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 --> /var/lib/minikube/images/etcd_3.6.6-0 (21761024 bytes)
I1229 07:39:16.504023 234900 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0
I1229 07:39:16.503775 234900 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0': No such file or directory
I1229 07:39:16.504072 234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0 (22434816 bytes)
I1229 07:39:16.503804 234900 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
I1229 07:39:16.504132 234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
I1229 07:39:16.503827 234900 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
I1229 07:39:16.504230 234900 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
I1229 07:39:16.503872 234900 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0': No such file or directory
I1229 07:39:16.504307 234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0 (20682752 bytes)
I1229 07:39:16.503907 234900 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0': No such file or directory
I1229 07:39:16.504381 234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0 (24702976 bytes)
I1229 07:39:16.560572 234900 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0': No such file or directory
I1229 07:39:16.560606 234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0 (15415808 bytes)
I1229 07:39:16.560666 234900 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
I1229 07:39:16.560675 234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
I1229 07:39:16.615828 234900 containerd.go:286] Loading image: /var/lib/minikube/images/pause_3.10.1
I1229 07:39:16.615993 234900 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
W1229 07:39:16.888450 234900 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
I1229 07:39:16.889178 234900 containerd.go:268] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
I1229 07:39:16.889519 234900 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
I1229 07:39:16.933473 234900 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
I1229 07:39:17.072143 234900 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
I1229 07:39:17.072242 234900 cri.go:226] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
I1229 07:39:17.072317 234900 ssh_runner.go:195] Run: which crictl
I1229 07:39:17.136399 234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
I1229 07:39:17.163414 234900 containerd.go:286] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0
I1229 07:39:17.163486 234900 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0
I1229 07:39:17.233522 234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
I1229 07:39:18.693201 234900 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0: (1.529687009s)
I1229 07:39:18.693232 234900 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 from cache
I1229 07:39:18.693252 234900 containerd.go:286] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0
I1229 07:39:18.693313 234900 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.459763822s)
I1229 07:39:18.693419 234900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
I1229 07:39:18.693514 234900 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0
I1229 07:39:19.489265 234900 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 from cache
I1229 07:39:19.489295 234900 containerd.go:286] Loading image: /var/lib/minikube/images/etcd_3.6.6-0
I1229 07:39:19.489348 234900 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.6-0
I1229 07:39:19.489398 234900 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
I1229 07:39:19.489495 234900 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
I1229 07:39:20.896744 234900 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.6-0: (1.407367011s)
I1229 07:39:20.896775 234900 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 from cache
I1229 07:39:20.896794 234900 containerd.go:286] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0
I1229 07:39:20.896811 234900 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.407270189s)
I1229 07:39:20.896837 234900 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
I1229 07:39:20.896843 234900 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0
I1229 07:39:20.896857 234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
I1229 07:39:21.938971 234900 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0: (1.04210568s)
I1229 07:39:21.939001 234900 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 from cache
I1229 07:39:21.939019 234900 containerd.go:286] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0
I1229 07:39:21.939067 234900 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0
I1229 07:39:22.997683 234900 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0: (1.058588554s)
I1229 07:39:22.997715 234900 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 from cache
I1229 07:39:22.997732 234900 containerd.go:286] Loading image: /var/lib/minikube/images/coredns_v1.13.1
I1229 07:39:22.997784 234900 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
I1229 07:39:24.092872 234900 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.095058701s)
I1229 07:39:24.092896 234900 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
I1229 07:39:24.092916 234900 containerd.go:286] Loading image: /var/lib/minikube/images/storage-provisioner_v5
I1229 07:39:24.092965 234900 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
I1229 07:39:24.446064 234900 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22353-2531/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
I1229 07:39:24.446100 234900 cache_images.go:125] Successfully loaded all cached images
I1229 07:39:24.446107 234900 cache_images.go:94] duration metric: took 8.773852422s to LoadCachedImages
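[Editor's note] Loading proceeds serially: each tarball under /var/lib/minikube/images is handed to ctr for import into the k8s.io namespace, and the per-image durations above sum to the 8.77s metric. A local-loop sketch of the same flow; minikube actually runs each command over SSH inside the node, so this is only the shape of it.

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"time"
)

func main() {
	start := time.Now()
	tars, err := filepath.Glob("/var/lib/minikube/images/*")
	if err != nil {
		panic(err)
	}
	for _, tar := range tars {
		// Same invocation the log shows, run locally for illustration.
		out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", tar).CombinedOutput()
		if err != nil {
			fmt.Printf("import %s failed: %v\n%s", tar, err, out)
			continue
		}
		fmt.Println("loaded", tar)
	}
	fmt.Printf("took %s to load cached images\n", time.Since(start))
}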
I1229 07:39:24.446118 234900 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 containerd true true} ...
I1229 07:39:24.446252 234900 kubeadm.go:947] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-918033 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
[Install]
config:
{KubernetesVersion:v1.35.0 ClusterName:no-preload-918033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1229 07:39:24.446324 234900 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
I1229 07:39:24.480759 234900 cni.go:84] Creating CNI manager for ""
I1229 07:39:24.480784 234900 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1229 07:39:24.480802 234900 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1229 07:39:24.480824 234900 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-918033 NodeName:no-preload-918033 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1229 07:39:24.480945 234900 kubeadm.go:203] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.76.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "no-preload-918033"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.76.2"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.35.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
failCgroupV1: false
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
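That is the complete generated kubeadm.yaml: four documents covering InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A config of this shape can be sanity-checked before init with the same kubeadm binary the log uses (kubeadm v1.26+ ships the validate subcommand); a sketch, run inside the node:

    sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml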
I1229 07:39:24.481016 234900 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
I1229 07:39:24.490336 234900 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0: Process exited with status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/binaries/v1.35.0': No such file or directory
Initiating transfer...
I1229 07:39:24.490417 234900 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0
I1229 07:39:24.498907 234900 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl.sha256
I1229 07:39:24.498999 234900 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubectl
I1229 07:39:24.499075 234900 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubelet.sha256
I1229 07:39:24.499107 234900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1229 07:39:24.499187 234900 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubeadm.sha256
I1229 07:39:24.499240 234900 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubeadm
I1229 07:39:24.517100 234900 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubeadm: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubeadm': No such file or directory
I1229 07:39:24.517137 234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/cache/linux/arm64/v1.35.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0/kubeadm (68354232 bytes)
I1229 07:39:24.517191 234900 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubectl: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubectl': No such file or directory
I1229 07:39:24.517207 234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/cache/linux/arm64/v1.35.0/kubectl --> /var/lib/minikube/binaries/v1.35.0/kubectl (55247032 bytes)
I1229 07:39:24.517307 234900 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubelet
I1229 07:39:24.529809 234900 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubelet: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubelet': No such file or directory
I1229 07:39:24.529851 234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/cache/linux/arm64/v1.35.0/kubelet --> /var/lib/minikube/binaries/v1.35.0/kubelet (54329636 bytes)
I1229 07:39:25.317444 234900 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1229 07:39:25.325087 234900 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
I1229 07:39:25.337985 234900 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1229 07:39:25.350470 234900 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2250 bytes)
I1229 07:39:25.363109 234900 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I1229 07:39:25.366695 234900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
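The one-liner above first drops any stale control-plane.minikube.internal entry and then appends the current mapping, so repeated starts keep /etc/hosts idempotent. A quick check that the record took, assuming the same profile:

    minikube -p no-preload-918033 ssh -- grep control-plane.minikube.internal /etc/hosts
    # expected: 192.168.76.2  control-plane.minikube.internal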
I1229 07:39:25.376228 234900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1229 07:39:25.492272 234900 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1229 07:39:25.511205 234900 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033 for IP: 192.168.76.2
I1229 07:39:25.511275 234900 certs.go:195] generating shared ca certs ...
I1229 07:39:25.511307 234900 certs.go:227] acquiring lock for ca certs: {Name:mked57565cbf0e383e0786d048d53beb808c0609 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1229 07:39:25.511497 234900 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-2531/.minikube/ca.key
I1229 07:39:25.511597 234900 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-2531/.minikube/proxy-client-ca.key
I1229 07:39:25.511624 234900 certs.go:257] generating profile certs ...
I1229 07:39:25.511712 234900 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/client.key
I1229 07:39:25.511753 234900 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/client.crt with IP's: []
I1229 07:39:26.070566 234900 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/client.crt ...
I1229 07:39:26.070600 234900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/client.crt: {Name:mke6fbc75d6afc614594909fcc9f7b2016fab856 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1229 07:39:26.070810 234900 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/client.key ...
I1229 07:39:26.070824 234900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/client.key: {Name:mk60da7d8e2fe6a897d733ec71cb884a5b71061c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1229 07:39:26.070914 234900 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/apiserver.key.031f711e
I1229 07:39:26.070934 234900 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/apiserver.crt.031f711e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
I1229 07:39:26.375643 234900 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/apiserver.crt.031f711e ...
I1229 07:39:26.375677 234900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/apiserver.crt.031f711e: {Name:mk6060f23362292301fa85a386b2c4d4f465605b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1229 07:39:26.376608 234900 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/apiserver.key.031f711e ...
I1229 07:39:26.376635 234900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/apiserver.key.031f711e: {Name:mkbbd56e538ac910fad749c8ee68b38982a96952 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1229 07:39:26.376732 234900 certs.go:382] copying /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/apiserver.crt.031f711e -> /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/apiserver.crt
I1229 07:39:26.376812 234900 certs.go:386] copying /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/apiserver.key.031f711e -> /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/apiserver.key
I1229 07:39:26.376903 234900 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/proxy-client.key
I1229 07:39:26.376930 234900 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/proxy-client.crt with IP's: []
I1229 07:39:26.491965 234900 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/proxy-client.crt ...
I1229 07:39:26.491995 234900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/proxy-client.crt: {Name:mk3e00cbea9bc6558325a122435172afe410ac33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1229 07:39:26.492914 234900 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/proxy-client.key ...
I1229 07:39:26.492932 234900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/proxy-client.key: {Name:mk60bc43cb56c3422562a551db9f0aa1d70c2fb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1229 07:39:26.494061 234900 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/4352.pem (1338 bytes)
W1229 07:39:26.494111 234900 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-2531/.minikube/certs/4352_empty.pem, impossibly tiny 0 bytes
I1229 07:39:26.494120 234900 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca-key.pem (1679 bytes)
I1229 07:39:26.494150 234900 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/ca.pem (1082 bytes)
I1229 07:39:26.494177 234900 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/cert.pem (1123 bytes)
I1229 07:39:26.494204 234900 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-2531/.minikube/certs/key.pem (1679 bytes)
I1229 07:39:26.494252 234900 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-2531/.minikube/files/etc/ssl/certs/43522.pem (1708 bytes)
I1229 07:39:26.494846 234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1229 07:39:26.514334 234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1229 07:39:26.534059 234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1229 07:39:26.552382 234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1229 07:39:26.570780 234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I1229 07:39:26.589132 234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1229 07:39:26.607092 234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1229 07:39:26.624722 234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/no-preload-918033/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I1229 07:39:26.642689 234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1229 07:39:26.660337 234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/certs/4352.pem --> /usr/share/ca-certificates/4352.pem (1338 bytes)
I1229 07:39:26.679042 234900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-2531/.minikube/files/etc/ssl/certs/43522.pem --> /usr/share/ca-certificates/43522.pem (1708 bytes)
I1229 07:39:26.702325 234900 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
I1229 07:39:26.717619 234900 ssh_runner.go:195] Run: openssl version
I1229 07:39:26.724531 234900 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1229 07:39:26.732985 234900 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1229 07:39:26.741139 234900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1229 07:39:26.745568 234900 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 06:47 /usr/share/ca-certificates/minikubeCA.pem
I1229 07:39:26.745671 234900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1229 07:39:26.786701 234900 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1229 07:39:26.794177 234900 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
I1229 07:39:26.801695 234900 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4352.pem
I1229 07:39:26.808911 234900 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4352.pem /etc/ssl/certs/4352.pem
I1229 07:39:26.816215 234900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4352.pem
I1229 07:39:26.820022 234900 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 06:52 /usr/share/ca-certificates/4352.pem
I1229 07:39:26.820116 234900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4352.pem
I1229 07:39:26.860960 234900 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I1229 07:39:26.868767 234900 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4352.pem /etc/ssl/certs/51391683.0
I1229 07:39:26.876363 234900 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/43522.pem
I1229 07:39:26.884070 234900 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/43522.pem /etc/ssl/certs/43522.pem
I1229 07:39:26.891933 234900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43522.pem
I1229 07:39:26.895936 234900 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 06:52 /usr/share/ca-certificates/43522.pem
I1229 07:39:26.896041 234900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43522.pem
I1229 07:39:26.937908 234900 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I1229 07:39:26.950396 234900 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/43522.pem /etc/ssl/certs/3ec20f2e.0
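Each openssl x509 -hash / ln -fs pair above builds the symlink index OpenSSL uses to look up CAs in /etc/ssl/certs: the link is named after the certificate's subject hash. The hash values can be reproduced directly, e.g.:

    # Prints b5213941, matching the /etc/ssl/certs/b5213941.0 link created above:
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem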
I1229 07:39:26.960765 234900 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1229 07:39:26.965186 234900 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1229 07:39:26.965277 234900 kubeadm.go:401] StartCluster: {Name:no-preload-918033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-918033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1229 07:39:26.965400 234900 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1229 07:39:26.965493 234900 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1229 07:39:26.994033 234900 cri.go:96] found id: ""
I1229 07:39:26.994155 234900 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1229 07:39:27.004217 234900 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1229 07:39:27.014161 234900 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1229 07:39:27.014233 234900 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1229 07:39:27.022738 234900 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1229 07:39:27.022762 234900 kubeadm.go:158] found existing configuration files:
I1229 07:39:27.022846 234900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1229 07:39:27.031228 234900 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1229 07:39:27.031294 234900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1229 07:39:27.039160 234900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1229 07:39:27.047307 234900 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1229 07:39:27.047372 234900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1229 07:39:27.054791 234900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1229 07:39:27.062575 234900 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1229 07:39:27.062640 234900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1229 07:39:27.070183 234900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1229 07:39:27.078378 234900 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1229 07:39:27.078448 234900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1229 07:39:27.086083 234900 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1229 07:39:27.207418 234900 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1229 07:39:27.207859 234900 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1229 07:39:27.275636 234900 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1229 07:39:39.812698 234900 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
I1229 07:39:39.812760 234900 kubeadm.go:319] [preflight] Running pre-flight checks
I1229 07:39:39.812848 234900 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1229 07:39:39.812904 234900 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1229 07:39:39.812937 234900 kubeadm.go:319] OS: Linux
I1229 07:39:39.812984 234900 kubeadm.go:319] CGROUPS_CPU: enabled
I1229 07:39:39.813064 234900 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1229 07:39:39.813134 234900 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1229 07:39:39.813199 234900 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1229 07:39:39.813249 234900 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1229 07:39:39.813306 234900 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1229 07:39:39.813355 234900 kubeadm.go:319] CGROUPS_PIDS: enabled
I1229 07:39:39.813414 234900 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1229 07:39:39.813471 234900 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1229 07:39:39.813552 234900 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1229 07:39:39.813647 234900 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1229 07:39:39.813739 234900 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1229 07:39:39.813804 234900 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1229 07:39:39.816934 234900 out.go:252] - Generating certificates and keys ...
I1229 07:39:39.817038 234900 kubeadm.go:319] [certs] Using existing ca certificate authority
I1229 07:39:39.817122 234900 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1229 07:39:39.817193 234900 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1229 07:39:39.817253 234900 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1229 07:39:39.817316 234900 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1229 07:39:39.817375 234900 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1229 07:39:39.817436 234900 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1229 07:39:39.817560 234900 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-918033] and IPs [192.168.76.2 127.0.0.1 ::1]
I1229 07:39:39.817615 234900 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1229 07:39:39.817736 234900 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-918033] and IPs [192.168.76.2 127.0.0.1 ::1]
I1229 07:39:39.817804 234900 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1229 07:39:39.817874 234900 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1229 07:39:39.817921 234900 kubeadm.go:319] [certs] Generating "sa" key and public key
I1229 07:39:39.817995 234900 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1229 07:39:39.818049 234900 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1229 07:39:39.818109 234900 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1229 07:39:39.818171 234900 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1229 07:39:39.818236 234900 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1229 07:39:39.818292 234900 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1229 07:39:39.818377 234900 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1229 07:39:39.818446 234900 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1229 07:39:39.821341 234900 out.go:252] - Booting up control plane ...
I1229 07:39:39.821456 234900 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1229 07:39:39.821540 234900 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1229 07:39:39.821610 234900 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1229 07:39:39.821720 234900 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1229 07:39:39.821818 234900 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1229 07:39:39.821956 234900 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1229 07:39:39.822056 234900 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1229 07:39:39.822105 234900 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1229 07:39:39.822241 234900 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1229 07:39:39.822350 234900 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1229 07:39:39.822430 234900 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001367535s
I1229 07:39:39.822527 234900 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1229 07:39:39.822611 234900 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
I1229 07:39:39.822725 234900 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1229 07:39:39.822808 234900 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1229 07:39:39.822888 234900 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.013328603s
I1229 07:39:39.822965 234900 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.859441554s
I1229 07:39:39.823035 234900 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.003418811s
I1229 07:39:39.823144 234900 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1229 07:39:39.823271 234900 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1229 07:39:39.823345 234900 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
I1229 07:39:39.823538 234900 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-918033 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1229 07:39:39.823598 234900 kubeadm.go:319] [bootstrap-token] Using token: 4w53e1.2t0pp28sxdrefpmi
I1229 07:39:39.826587 234900 out.go:252] - Configuring RBAC rules ...
I1229 07:39:39.826714 234900 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1229 07:39:39.826805 234900 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1229 07:39:39.826949 234900 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1229 07:39:39.827080 234900 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1229 07:39:39.827205 234900 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1229 07:39:39.827296 234900 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1229 07:39:39.827415 234900 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1229 07:39:39.827461 234900 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
I1229 07:39:39.827511 234900 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
I1229 07:39:39.827519 234900 kubeadm.go:319]
I1229 07:39:39.827583 234900 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
I1229 07:39:39.827591 234900 kubeadm.go:319]
I1229 07:39:39.827669 234900 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
I1229 07:39:39.827676 234900 kubeadm.go:319]
I1229 07:39:39.827701 234900 kubeadm.go:319] mkdir -p $HOME/.kube
I1229 07:39:39.827763 234900 kubeadm.go:319] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1229 07:39:39.827823 234900 kubeadm.go:319] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1229 07:39:39.827830 234900 kubeadm.go:319]
I1229 07:39:39.827901 234900 kubeadm.go:319] Alternatively, if you are the root user, you can run:
I1229 07:39:39.827908 234900 kubeadm.go:319]
I1229 07:39:39.827956 234900 kubeadm.go:319] export KUBECONFIG=/etc/kubernetes/admin.conf
I1229 07:39:39.827964 234900 kubeadm.go:319]
I1229 07:39:39.828015 234900 kubeadm.go:319] You should now deploy a pod network to the cluster.
I1229 07:39:39.828093 234900 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1229 07:39:39.828164 234900 kubeadm.go:319] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1229 07:39:39.828170 234900 kubeadm.go:319]
I1229 07:39:39.828254 234900 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
I1229 07:39:39.828334 234900 kubeadm.go:319] and service account keys on each node and then running the following as root:
I1229 07:39:39.828341 234900 kubeadm.go:319]
I1229 07:39:39.828426 234900 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 4w53e1.2t0pp28sxdrefpmi \
I1229 07:39:39.828532 234900 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:d98392a0db18aee16ce0424e6d823438ce761b4275760bd1e31f17fdc46df4c0 \
I1229 07:39:39.828555 234900 kubeadm.go:319] --control-plane
I1229 07:39:39.828562 234900 kubeadm.go:319]
I1229 07:39:39.828646 234900 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
I1229 07:39:39.828653 234900 kubeadm.go:319]
I1229 07:39:39.828735 234900 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 4w53e1.2t0pp28sxdrefpmi \
I1229 07:39:39.828854 234900 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:d98392a0db18aee16ce0424e6d823438ce761b4275760bd1e31f17fdc46df4c0
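The bootstrap token in that join command expires (ttl: 24h0m0s in the InitConfiguration above), so in practice a fresh join line is regenerated rather than copied from old output. A sketch using the same kubeadm binary, run on the control-plane node:

    # Print a new worker join command with a freshly minted token:
    sudo /var/lib/minikube/binaries/v1.35.0/kubeadm token create --print-join-command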
I1229 07:39:39.828869 234900 cni.go:84] Creating CNI manager for ""
I1229 07:39:39.828924 234900 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1229 07:39:39.834032 234900 out.go:179] * Configuring CNI (Container Networking Interface) ...
I1229 07:39:39.836991 234900 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I1229 07:39:39.841281 234900 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
I1229 07:39:39.841305 234900 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
I1229 07:39:39.855055 234900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I1229 07:39:40.160605 234900 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1229 07:39:40.160670 234900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1229 07:39:40.160736 234900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-918033 minikube.k8s.io/updated_at=2025_12_29T07_39_40_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8 minikube.k8s.io/name=no-preload-918033 minikube.k8s.io/primary=true
I1229 07:39:40.178903 234900 ops.go:34] apiserver oom_adj: -16
I1229 07:39:40.381739 234900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1229 07:39:40.882389 234900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1229 07:39:41.382278 234900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1229 07:39:41.881856 234900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1229 07:39:42.381885 234900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1229 07:39:42.882278 234900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1229 07:39:43.382857 234900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1229 07:39:43.882715 234900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1229 07:39:44.382653 234900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1229 07:39:44.466623 234900 kubeadm.go:1114] duration metric: took 4.305997311s to wait for elevateKubeSystemPrivileges
I1229 07:39:44.466661 234900 kubeadm.go:403] duration metric: took 17.501387599s to StartCluster
I1229 07:39:44.466678 234900 settings.go:142] acquiring lock: {Name:mkbb5f02ec6801af9f7806fd554ca9cee95eb430 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1229 07:39:44.466760 234900 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/22353-2531/kubeconfig
I1229 07:39:44.467357 234900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-2531/kubeconfig: {Name:mk79bef4549b8f63fb70afbc722117a9e75f76e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1229 07:39:44.467603 234900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1229 07:39:44.467620 234900 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1229 07:39:44.467863 234900 config.go:182] Loaded profile config "no-preload-918033": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1229 07:39:44.467902 234900 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I1229 07:39:44.467964 234900 addons.go:70] Setting storage-provisioner=true in profile "no-preload-918033"
I1229 07:39:44.467996 234900 addons.go:239] Setting addon storage-provisioner=true in "no-preload-918033"
I1229 07:39:44.468018 234900 addons.go:70] Setting default-storageclass=true in profile "no-preload-918033"
I1229 07:39:44.468049 234900 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-918033"
I1229 07:39:44.468020 234900 host.go:66] Checking if "no-preload-918033" exists ...
I1229 07:39:44.468391 234900 cli_runner.go:164] Run: docker container inspect no-preload-918033 --format={{.State.Status}}
I1229 07:39:44.468586 234900 cli_runner.go:164] Run: docker container inspect no-preload-918033 --format={{.State.Status}}
I1229 07:39:44.473507 234900 out.go:179] * Verifying Kubernetes components...
I1229 07:39:44.476423 234900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1229 07:39:44.498822 234900 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1229 07:39:44.502945 234900 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1229 07:39:44.502968 234900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1229 07:39:44.503033 234900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-918033
I1229 07:39:44.512379 234900 addons.go:239] Setting addon default-storageclass=true in "no-preload-918033"
I1229 07:39:44.512416 234900 host.go:66] Checking if "no-preload-918033" exists ...
I1229 07:39:44.512832 234900 cli_runner.go:164] Run: docker container inspect no-preload-918033 --format={{.State.Status}}
I1229 07:39:44.539705 234900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/no-preload-918033/id_rsa Username:docker}
I1229 07:39:44.562188 234900 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
I1229 07:39:44.562217 234900 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1229 07:39:44.562278 234900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-918033
I1229 07:39:44.586778 234900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22353-2531/.minikube/machines/no-preload-918033/id_rsa Username:docker}
I1229 07:39:44.695376 234900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.76.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I1229 07:39:44.842331 234900 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1229 07:39:45.067857 234900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1229 07:39:45.120022 234900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1229 07:39:45.501596 234900 start.go:987] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
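The replace pipeline at 07:39:44.695376 is what injected that record: it splices a hosts {} block ahead of the forward plugin in the CoreDNS Corefile. The result can be confirmed from the ConfigMap, assuming kubectl is pointed at this cluster:

    kubectl -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'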
I1229 07:39:45.504035 234900 node_ready.go:35] waiting up to 6m0s for node "no-preload-918033" to be "Ready" ...
I1229 07:39:45.978046 234900 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
I1229 07:39:45.982596 234900 addons.go:530] duration metric: took 1.514686547s for enable addons: enabled=[storage-provisioner default-storageclass]
I1229 07:39:46.008402 234900 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-918033" context rescaled to 1 replicas
W1229 07:39:47.508703 234900 node_ready.go:57] node "no-preload-918033" has "Ready":"False" status (will retry)
W1229 07:39:50.012135 234900 node_ready.go:57] node "no-preload-918033" has "Ready":"False" status (will retry)
W1229 07:39:52.507472 234900 node_ready.go:57] node "no-preload-918033" has "Ready":"False" status (will retry)
W1229 07:39:54.507789 234900 node_ready.go:57] node "no-preload-918033" has "Ready":"False" status (will retry)
W1229 07:39:57.009113 234900 node_ready.go:57] node "no-preload-918033" has "Ready":"False" status (will retry)
I1229 07:39:58.009600 234900 node_ready.go:49] node "no-preload-918033" is "Ready"
I1229 07:39:58.009635 234900 node_ready.go:38] duration metric: took 12.505569558s for node "no-preload-918033" to be "Ready" ...
I1229 07:39:58.009649 234900 api_server.go:52] waiting for apiserver process to appear ...
I1229 07:39:58.009707 234900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1229 07:39:58.028409 234900 api_server.go:72] duration metric: took 13.560758885s to wait for apiserver process to appear ...
I1229 07:39:58.028439 234900 api_server.go:88] waiting for apiserver healthz status ...
I1229 07:39:58.028466 234900 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I1229 07:39:58.037326 234900 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
ok
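The same probe can be issued by hand; /healthz is served over TLS on the apiserver port, so curl needs -k (or the cluster CA):

    minikube -p no-preload-918033 ssh -- curl -sk https://192.168.76.2:8443/healthz
    # expected output: ok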
I1229 07:39:58.038412 234900 api_server.go:141] control plane version: v1.35.0
I1229 07:39:58.038444 234900 api_server.go:131] duration metric: took 9.99739ms to wait for apiserver health ...
I1229 07:39:58.038454 234900 system_pods.go:43] waiting for kube-system pods to appear ...
I1229 07:39:58.042215 234900 system_pods.go:59] 8 kube-system pods found
I1229 07:39:58.042255 234900 system_pods.go:61] "coredns-7d764666f9-4s98b" [6564fde7-550f-42db-91c4-b334e200a55e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1229 07:39:58.042262 234900 system_pods.go:61] "etcd-no-preload-918033" [0b693cde-f19a-476c-ba5c-4cf4554a8950] Running
I1229 07:39:58.042268 234900 system_pods.go:61] "kindnet-fgx5f" [07434cc3-095b-41fe-a1a8-42bae3cba717] Running
I1229 07:39:58.042273 234900 system_pods.go:61] "kube-apiserver-no-preload-918033" [81a905de-f745-4e88-b6ee-6008c0eaa421] Running
I1229 07:39:58.042279 234900 system_pods.go:61] "kube-controller-manager-no-preload-918033" [f374326b-b9ba-41be-8f57-aa511e564cdf] Running
I1229 07:39:58.042283 234900 system_pods.go:61] "kube-proxy-jc85q" [061e97e6-263b-46f2-86ae-c1311f9f6f69] Running
I1229 07:39:58.042288 234900 system_pods.go:61] "kube-scheduler-no-preload-918033" [4f5ba260-569d-42b8-a11a-78a0ffb67946] Running
I1229 07:39:58.042293 234900 system_pods.go:61] "storage-provisioner" [6008a045-2b64-49e7-9867-2cf554a046e4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1229 07:39:58.042299 234900 system_pods.go:74] duration metric: took 3.839508ms to wait for pod list to return data ...
I1229 07:39:58.042312 234900 default_sa.go:34] waiting for default service account to be created ...
I1229 07:39:58.046139 234900 default_sa.go:45] found service account: "default"
I1229 07:39:58.046168 234900 default_sa.go:55] duration metric: took 3.847622ms for default service account to be created ...
I1229 07:39:58.046180 234900 system_pods.go:116] waiting for k8s-apps to be running ...
I1229 07:39:58.053376 234900 system_pods.go:86] 8 kube-system pods found
I1229 07:39:58.053410 234900 system_pods.go:89] "coredns-7d764666f9-4s98b" [6564fde7-550f-42db-91c4-b334e200a55e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1229 07:39:58.053418 234900 system_pods.go:89] "etcd-no-preload-918033" [0b693cde-f19a-476c-ba5c-4cf4554a8950] Running
I1229 07:39:58.053443 234900 system_pods.go:89] "kindnet-fgx5f" [07434cc3-095b-41fe-a1a8-42bae3cba717] Running
I1229 07:39:58.053454 234900 system_pods.go:89] "kube-apiserver-no-preload-918033" [81a905de-f745-4e88-b6ee-6008c0eaa421] Running
I1229 07:39:58.053460 234900 system_pods.go:89] "kube-controller-manager-no-preload-918033" [f374326b-b9ba-41be-8f57-aa511e564cdf] Running
I1229 07:39:58.053465 234900 system_pods.go:89] "kube-proxy-jc85q" [061e97e6-263b-46f2-86ae-c1311f9f6f69] Running
I1229 07:39:58.053476 234900 system_pods.go:89] "kube-scheduler-no-preload-918033" [4f5ba260-569d-42b8-a11a-78a0ffb67946] Running
I1229 07:39:58.053483 234900 system_pods.go:89] "storage-provisioner" [6008a045-2b64-49e7-9867-2cf554a046e4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1229 07:39:58.053524 234900 retry.go:84] will retry after 300ms: missing components: kube-dns
I1229 07:39:58.309277 234900 system_pods.go:86] 8 kube-system pods found
I1229 07:39:58.309314 234900 system_pods.go:89] "coredns-7d764666f9-4s98b" [6564fde7-550f-42db-91c4-b334e200a55e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1229 07:39:58.309331 234900 system_pods.go:89] "etcd-no-preload-918033" [0b693cde-f19a-476c-ba5c-4cf4554a8950] Running
I1229 07:39:58.309337 234900 system_pods.go:89] "kindnet-fgx5f" [07434cc3-095b-41fe-a1a8-42bae3cba717] Running
I1229 07:39:58.309343 234900 system_pods.go:89] "kube-apiserver-no-preload-918033" [81a905de-f745-4e88-b6ee-6008c0eaa421] Running
I1229 07:39:58.309348 234900 system_pods.go:89] "kube-controller-manager-no-preload-918033" [f374326b-b9ba-41be-8f57-aa511e564cdf] Running
I1229 07:39:58.309353 234900 system_pods.go:89] "kube-proxy-jc85q" [061e97e6-263b-46f2-86ae-c1311f9f6f69] Running
I1229 07:39:58.309361 234900 system_pods.go:89] "kube-scheduler-no-preload-918033" [4f5ba260-569d-42b8-a11a-78a0ffb67946] Running
I1229 07:39:58.309368 234900 system_pods.go:89] "storage-provisioner" [6008a045-2b64-49e7-9867-2cf554a046e4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1229 07:39:58.642095 234900 system_pods.go:86] 8 kube-system pods found
I1229 07:39:58.642128 234900 system_pods.go:89] "coredns-7d764666f9-4s98b" [6564fde7-550f-42db-91c4-b334e200a55e] Running
I1229 07:39:58.642136 234900 system_pods.go:89] "etcd-no-preload-918033" [0b693cde-f19a-476c-ba5c-4cf4554a8950] Running
I1229 07:39:58.642141 234900 system_pods.go:89] "kindnet-fgx5f" [07434cc3-095b-41fe-a1a8-42bae3cba717] Running
I1229 07:39:58.642145 234900 system_pods.go:89] "kube-apiserver-no-preload-918033" [81a905de-f745-4e88-b6ee-6008c0eaa421] Running
I1229 07:39:58.642151 234900 system_pods.go:89] "kube-controller-manager-no-preload-918033" [f374326b-b9ba-41be-8f57-aa511e564cdf] Running
I1229 07:39:58.642155 234900 system_pods.go:89] "kube-proxy-jc85q" [061e97e6-263b-46f2-86ae-c1311f9f6f69] Running
I1229 07:39:58.642163 234900 system_pods.go:89] "kube-scheduler-no-preload-918033" [4f5ba260-569d-42b8-a11a-78a0ffb67946] Running
I1229 07:39:58.642167 234900 system_pods.go:89] "storage-provisioner" [6008a045-2b64-49e7-9867-2cf554a046e4] Running
I1229 07:39:58.642174 234900 system_pods.go:126] duration metric: took 595.98942ms to wait for k8s-apps to be running ...
I1229 07:39:58.642187 234900 system_svc.go:44] waiting for kubelet service to be running ....
I1229 07:39:58.642245 234900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1229 07:39:58.657180 234900 system_svc.go:56] duration metric: took 14.983182ms WaitForService to wait for kubelet
I1229 07:39:58.657208 234900 kubeadm.go:587] duration metric: took 14.189562627s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1229 07:39:58.657227 234900 node_conditions.go:102] verifying NodePressure condition ...
I1229 07:39:58.659993 234900 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
I1229 07:39:58.660023 234900 node_conditions.go:123] node cpu capacity is 2
I1229 07:39:58.660037 234900 node_conditions.go:105] duration metric: took 2.805363ms to run NodePressure ...
I1229 07:39:58.660051 234900 start.go:242] waiting for startup goroutines ...
I1229 07:39:58.660058 234900 start.go:247] waiting for cluster config update ...
I1229 07:39:58.660069 234900 start.go:256] writing updated cluster config ...
I1229 07:39:58.660370 234900 ssh_runner.go:195] Run: rm -f paused
I1229 07:39:58.664201 234900 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1229 07:39:58.668007 234900 pod_ready.go:83] waiting for pod "coredns-7d764666f9-4s98b" in "kube-system" namespace to be "Ready" or be gone ...
I1229 07:39:58.673140 234900 pod_ready.go:94] pod "coredns-7d764666f9-4s98b" is "Ready"
I1229 07:39:58.673208 234900 pod_ready.go:86] duration metric: took 5.17018ms for pod "coredns-7d764666f9-4s98b" in "kube-system" namespace to be "Ready" or be gone ...
I1229 07:39:58.675801 234900 pod_ready.go:83] waiting for pod "etcd-no-preload-918033" in "kube-system" namespace to be "Ready" or be gone ...
I1229 07:39:58.680341 234900 pod_ready.go:94] pod "etcd-no-preload-918033" is "Ready"
I1229 07:39:58.680365 234900 pod_ready.go:86] duration metric: took 4.538305ms for pod "etcd-no-preload-918033" in "kube-system" namespace to be "Ready" or be gone ...
I1229 07:39:58.682928 234900 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-918033" in "kube-system" namespace to be "Ready" or be gone ...
I1229 07:39:58.687615 234900 pod_ready.go:94] pod "kube-apiserver-no-preload-918033" is "Ready"
I1229 07:39:58.687641 234900 pod_ready.go:86] duration metric: took 4.686975ms for pod "kube-apiserver-no-preload-918033" in "kube-system" namespace to be "Ready" or be gone ...
I1229 07:39:58.690252 234900 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-918033" in "kube-system" namespace to be "Ready" or be gone ...
I1229 07:39:59.068169 234900 pod_ready.go:94] pod "kube-controller-manager-no-preload-918033" is "Ready"
I1229 07:39:59.068195 234900 pod_ready.go:86] duration metric: took 377.913348ms for pod "kube-controller-manager-no-preload-918033" in "kube-system" namespace to be "Ready" or be gone ...
I1229 07:39:59.268858 234900 pod_ready.go:83] waiting for pod "kube-proxy-jc85q" in "kube-system" namespace to be "Ready" or be gone ...
I1229 07:39:59.667690 234900 pod_ready.go:94] pod "kube-proxy-jc85q" is "Ready"
I1229 07:39:59.667716 234900 pod_ready.go:86] duration metric: took 398.82863ms for pod "kube-proxy-jc85q" in "kube-system" namespace to be "Ready" or be gone ...
I1229 07:39:59.868234 234900 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-918033" in "kube-system" namespace to be "Ready" or be gone ...
I1229 07:40:00.274981 234900 pod_ready.go:94] pod "kube-scheduler-no-preload-918033" is "Ready"
I1229 07:40:00.275067 234900 pod_ready.go:86] duration metric: took 406.802719ms for pod "kube-scheduler-no-preload-918033" in "kube-system" namespace to be "Ready" or be gone ...
I1229 07:40:00.275099 234900 pod_ready.go:40] duration metric: took 1.610863516s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
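
[Annotation] The pod_ready lines above show minikube polling each kube-system control-plane pod, by label selector, until it reports Ready (or disappears). A minimal client-go sketch of that check follows; it is illustrative only, not minikube's actual implementation. The label selectors and the 4m0s budget are copied from the log; everything else (file name, polling interval) is assumed.

// readiness_sketch.go - minimal sketch of the "wait for kube-system pods to be
// Ready" step logged above. Not minikube's code; the real check also accepts a
// pod being gone, which this sketch omits.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Label selectors copied verbatim from the pod_ready log lines above.
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy",
		"component=kube-scheduler",
	}
	deadline := time.Now().Add(4 * time.Minute) // the 4m0s budget in the log
	for _, sel := range selectors {
		for {
			pods, err := client.CoreV1().Pods("kube-system").List(
				context.TODO(), metav1.ListOptions{LabelSelector: sel})
			if err == nil && len(pods.Items) > 0 && podReady(&pods.Items[0]) {
				fmt.Printf("pod %q is Ready\n", pods.Items[0].Name)
				break
			}
			if time.Now().After(deadline) {
				panic(fmt.Sprintf("timed out waiting for %s", sel))
			}
			time.Sleep(500 * time.Millisecond) // assumed polling interval
		}
	}
}
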
I1229 07:40:00.559168 234900 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
I1229 07:40:00.588169 234900 out.go:203]
W1229 07:40:00.591232 234900 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
I1229 07:40:00.596969 234900 out.go:179] - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
I1229 07:40:00.602529 234900 out.go:179] * Done! kubectl is now configured to use "no-preload-918033" cluster and "default" namespace by default
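
[Annotation] The "minor skew: 2" warning above comes from comparing the minor components of the kubectl and server versions (1.33 vs 1.35); Kubernetes' skew policy supports kubectl within one minor version of the apiserver. A toy Go sketch of that computation, not minikube's actual code:

// skew_sketch.go - toy illustration of the "minor skew" computation behind the
// warning above (kubectl 1.33.2 vs cluster 1.35.0 => skew 2).
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor component of a "major.minor.patch" version string.
func minor(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	n, err := strconv.Atoi(parts[1])
	if err != nil {
		panic(err)
	}
	return n
}

func main() {
	kubectl, cluster := "1.33.2", "1.35.0" // versions from the log line above
	skew := minor(cluster) - minor(kubectl)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("minor skew: %d\n", skew) // prints "minor skew: 2"
	if skew > 1 {
		fmt.Println("! kubectl may have incompatibilities with this cluster")
	}
}
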
I1229 07:40:15.207793 210456 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001145684s
I1229 07:40:15.207822 210456 kubeadm.go:319]
I1229 07:40:15.207881 210456 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1229 07:40:15.207921 210456 kubeadm.go:319] - The kubelet is not running
I1229 07:40:15.208335 210456 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1229 07:40:15.208367 210456 kubeadm.go:319]
I1229 07:40:15.208562 210456 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1229 07:40:15.208767 210456 kubeadm.go:319] - 'systemctl status kubelet'
I1229 07:40:15.208824 210456 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1229 07:40:15.208831 210456 kubeadm.go:319]
I1229 07:40:15.214036 210456 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1229 07:40:15.214541 210456 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1229 07:40:15.214683 210456 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1229 07:40:15.215072 210456 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1229 07:40:15.215094 210456 kubeadm.go:319]
I1229 07:40:15.215202 210456 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
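
[Annotation] kubeadm's [kubelet-check] phase, whose failure is relayed above, is essentially a repeated GET against the kubelet's healthz endpoint on 127.0.0.1:10248 with a four-minute budget, equivalent to the `curl -sSL http://127.0.0.1:10248/healthz` it names. A standalone Go sketch of that probe (illustrative, not kubeadm's source):

// healthz_probe.go - standalone sketch of the kubelet health check performed
// by kubeadm's [kubelet-check] phase. Endpoint and 4m0s budget are from the
// log; the per-request timeout and retry interval are assumptions.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	deadline := time.Now().Add(4 * time.Minute) // the 4m0s budget seen above
	for time.Now().Before(deadline) {
		resp, err := client.Get("http://127.0.0.1:10248/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("kubelet healthy: %s\n", body) // normally "ok"
				return
			}
		}
		time.Sleep(time.Second)
	}
	fmt.Println("kubelet not healthy after 4m0s") // the failure path hit above
}
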
I1229 07:40:15.215232 210456 kubeadm.go:403] duration metric: took 8m6.753852906s to StartCluster
I1229 07:40:15.215267 210456 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1229 07:40:15.215335 210456 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
I1229 07:40:15.242364 210456 cri.go:96] found id: ""
I1229 07:40:15.242397 210456 logs.go:282] 0 containers: []
W1229 07:40:15.242407 210456 logs.go:284] No container was found matching "kube-apiserver"
I1229 07:40:15.242414 210456 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1229 07:40:15.242481 210456 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
I1229 07:40:15.268537 210456 cri.go:96] found id: ""
I1229 07:40:15.268562 210456 logs.go:282] 0 containers: []
W1229 07:40:15.268570 210456 logs.go:284] No container was found matching "etcd"
I1229 07:40:15.268577 210456 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1229 07:40:15.268637 210456 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
I1229 07:40:15.296387 210456 cri.go:96] found id: ""
I1229 07:40:15.296427 210456 logs.go:282] 0 containers: []
W1229 07:40:15.296436 210456 logs.go:284] No container was found matching "coredns"
I1229 07:40:15.296443 210456 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1229 07:40:15.296513 210456 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
I1229 07:40:15.322743 210456 cri.go:96] found id: ""
I1229 07:40:15.322771 210456 logs.go:282] 0 containers: []
W1229 07:40:15.322784 210456 logs.go:284] No container was found matching "kube-scheduler"
I1229 07:40:15.322792 210456 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1229 07:40:15.322868 210456 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
I1229 07:40:15.351562 210456 cri.go:96] found id: ""
I1229 07:40:15.351598 210456 logs.go:282] 0 containers: []
W1229 07:40:15.351607 210456 logs.go:284] No container was found matching "kube-proxy"
I1229 07:40:15.351619 210456 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1229 07:40:15.351682 210456 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
I1229 07:40:15.376895 210456 cri.go:96] found id: ""
I1229 07:40:15.376919 210456 logs.go:282] 0 containers: []
W1229 07:40:15.376928 210456 logs.go:284] No container was found matching "kube-controller-manager"
I1229 07:40:15.376935 210456 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1229 07:40:15.376995 210456 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
I1229 07:40:15.404023 210456 cri.go:96] found id: ""
I1229 07:40:15.404049 210456 logs.go:282] 0 containers: []
W1229 07:40:15.404058 210456 logs.go:284] No container was found matching "kindnet"
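
[Annotation] After StartCluster fails, minikube enumerates CRI containers for each control-plane component by shelling out to `sudo crictl --timeout=10s ps -a --quiet --name=<component>`, as the Run lines above show; every query returns an empty ID list because the kubelet never launched any pods. A hedged Go sketch of that enumeration, run locally rather than over SSH:

// cri_list_sketch.go - sketch of the container enumeration performed above.
// The crictl invocation is copied from the log; the local exec wrapper is an
// illustrative stand-in for minikube's ssh_runner.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "--timeout=10s",
			"ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		// In the failing run above, every component reported `found id: ""`
		// and "0 containers": nothing was ever started.
		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
	}
}
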
I1229 07:40:15.404069 210456 logs.go:123] Gathering logs for dmesg ...
I1229 07:40:15.404082 210456 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1229 07:40:15.418184 210456 logs.go:123] Gathering logs for describe nodes ...
I1229 07:40:15.418215 210456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1229 07:40:15.484850 210456 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1229 07:40:15.476614 4804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1229 07:40:15.477057 4804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1229 07:40:15.478609 4804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1229 07:40:15.478976 4804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1229 07:40:15.480500 4804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
E1229 07:40:15.476614 4804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1229 07:40:15.477057 4804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1229 07:40:15.478609 4804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1229 07:40:15.478976 4804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1229 07:40:15.480500 4804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
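
[Annotation] The `describe nodes` attempt above fails with "connection refused" on localhost:8443 because nothing is listening there: the apiserver static pod never came up. A quick Go check (illustrative only) that distinguishes "port closed" from a slow or misbehaving apiserver:

// apiserver_dial.go - minimal sketch distinguishing "nothing listening on
// 127.0.0.1:8443" (the `connection refused` seen above) from other failures.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 3*time.Second)
	if err != nil {
		// A refused connection means no process holds the port, matching the
		// failure mode above: the apiserver container was never created, so
		// kubectl cannot reach anything.
		fmt.Printf("apiserver unreachable: %v\n", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on 127.0.0.1:8443")
}
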
I1229 07:40:15.484926 210456 logs.go:123] Gathering logs for containerd ...
I1229 07:40:15.484952 210456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1229 07:40:15.525775 210456 logs.go:123] Gathering logs for container status ...
I1229 07:40:15.525809 210456 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1229 07:40:15.555955 210456 logs.go:123] Gathering logs for kubelet ...
I1229 07:40:15.556034 210456 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1229 07:40:15.612571 210456 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001145684s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1229 07:40:15.612644 210456 out.go:285] *
W1229 07:40:15.612696 210456 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
[stdout and stderr identical to the kubeadm init output above]
W1229 07:40:15.612713 210456 out.go:285] *
W1229 07:40:15.612962 210456 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1229 07:40:15.617816 210456 out.go:203]
W1229 07:40:15.621782 210456 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
[stdout and stderr identical to the kubeadm init output above]
W1229 07:40:15.621867 210456 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W1229 07:40:15.621888 210456 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I1229 07:40:15.625797 210456 out.go:203]
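
[Annotation] minikube's suggestion above is to rerun start with `--extra-config=kubelet.cgroup-driver=systemd`. On this host the actual blocker is the cgroup v1 validation (see the kubelet journal below), so the flag alone may not help, but a retry wrapper would look like this sketch. All flags are copied from this log; this is not an endorsed fix.

// retry_sketch.go - sketch of retrying the failed start with the flag that
// minikube suggests above. Whether the extra-config helps on a cgroup v1
// host is an open question.
package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "start",
		"-p", "force-systemd-flag-275936",
		"--memory=3072", "--force-systemd",
		"--driver=docker", "--container-runtime=containerd",
		"--extra-config=kubelet.cgroup-driver=systemd",
	)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}
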
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
==> containerd <==
Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.915470796Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.915537947Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.915648455Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.915719759Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.915789076Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.915854571Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.915911572Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.915973998Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.916040657Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.916129323Z" level=info msg="Connect containerd service"
Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.916546756Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.917238801Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.927329274Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.927393693Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.927420491Z" level=info msg="Start subscribing containerd event"
Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.927477230Z" level=info msg="Start recovering state"
Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.968265850Z" level=info msg="Start event monitor"
Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.968458919Z" level=info msg="Start cni network conf syncer for default"
Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.968526727Z" level=info msg="Start streaming server"
Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.968593748Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.968654524Z" level=info msg="runtime interface starting up..."
Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.968711657Z" level=info msg="starting plugins..."
Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.968776224Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Dec 29 07:32:06 force-systemd-flag-275936 systemd[1]: Started containerd.service - containerd container runtime.
Dec 29 07:32:06 force-systemd-flag-275936 containerd[756]: time="2025-12-29T07:32:06.971026262Z" level=info msg="containerd successfully booted in 0.082352s"
==> describe nodes <==
command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1229 07:40:17.017692 4940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1229 07:40:17.018828 4940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1229 07:40:17.019906 4940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1229 07:40:17.020639 4940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
E1229 07:40:17.021717 4940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
The connection to the server localhost:8443 was refused - did you specify the right host or port?
==> dmesg <==
[Dec29 06:17] ACPI: SRAT not present
[ +0.000000] ACPI: SRAT not present
[ +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
[ +0.014780] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[ +0.558389] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
[ +0.034938] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
[ +0.769839] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
[ +7.300699] kauditd_printk_skb: 39 callbacks suppressed
[Dec29 07:00] hrtimer: interrupt took 19167915 ns
==> kernel <==
07:40:17 up 1:22, 0 user, load average: 1.63, 1.68, 2.08
Linux force-systemd-flag-275936 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
==> kubelet <==
Dec 29 07:40:13 force-systemd-flag-275936 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 29 07:40:14 force-systemd-flag-275936 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
Dec 29 07:40:14 force-systemd-flag-275936 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 29 07:40:14 force-systemd-flag-275936 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 29 07:40:14 force-systemd-flag-275936 kubelet[4733]: E1229 07:40:14.224114 4733 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 29 07:40:14 force-systemd-flag-275936 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 29 07:40:14 force-systemd-flag-275936 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 29 07:40:14 force-systemd-flag-275936 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
Dec 29 07:40:14 force-systemd-flag-275936 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 29 07:40:14 force-systemd-flag-275936 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 29 07:40:14 force-systemd-flag-275936 kubelet[4739]: E1229 07:40:14.980376 4739 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 29 07:40:14 force-systemd-flag-275936 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 29 07:40:14 force-systemd-flag-275936 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 29 07:40:15 force-systemd-flag-275936 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
Dec 29 07:40:15 force-systemd-flag-275936 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 29 07:40:15 force-systemd-flag-275936 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 29 07:40:15 force-systemd-flag-275936 kubelet[4823]: E1229 07:40:15.744042 4823 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 29 07:40:15 force-systemd-flag-275936 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 29 07:40:15 force-systemd-flag-275936 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 29 07:40:16 force-systemd-flag-275936 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
Dec 29 07:40:16 force-systemd-flag-275936 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 29 07:40:16 force-systemd-flag-275936 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 29 07:40:16 force-systemd-flag-275936 kubelet[4852]: E1229 07:40:16.510038 4852 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 29 07:40:16 force-systemd-flag-275936 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 29 07:40:16 force-systemd-flag-275936 systemd[1]: kubelet.service: Failed with result 'exit-code'.
-- /stdout --
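
[Annotation] The kubelet journal above shows the root cause of the whole failure: kubelet v1.35 refuses to start on a cgroup v1 host unless the `FailCgroupV1` configuration option is explicitly set to `false`, exactly as the SystemVerification warning earlier in the log states. A quick Go sketch of detecting a host's cgroup version, assuming the standard unified-hierarchy marker file (this detection approach is an assumption, not minikube's code):

// cgroup_version.go - sketch detecting cgroup v2 by probing
// /sys/fs/cgroup/cgroup.controllers, which exists only under the v2 unified
// hierarchy. On this CI host it would report v1, matching the kubelet
// validation failure in the journal above.
package main

import (
	"fmt"
	"os"
)

func main() {
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		fmt.Println("cgroup v2 (unified hierarchy)")
	} else if os.IsNotExist(err) {
		fmt.Println("cgroup v1 - kubelet >= v1.35 requires FailCgroupV1=false here")
	} else {
		fmt.Printf("could not determine cgroup version: %v\n", err)
	}
}
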
helpers_test.go:263: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-275936 -n force-systemd-flag-275936
E1229 07:40:17.495793 4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/old-k8s-version-599664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:40:17.501866 4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/old-k8s-version-599664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:40:17.513327 4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/old-k8s-version-599664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:40:17.534293 4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/old-k8s-version-599664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-275936 -n force-systemd-flag-275936: exit status 6 (367.29621ms)
-- stdout --
Stopped
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1229 07:40:17.520862 239811 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-275936" does not appear in /home/jenkins/minikube-integration/22353-2531/kubeconfig
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-flag-275936" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-flag-275936" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-arm64 delete -p force-systemd-flag-275936
E1229 07:40:17.575401 4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/old-k8s-version-599664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:40:17.655742 4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/old-k8s-version-599664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:40:17.816165 4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/old-k8s-version-599664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:40:18.136730 4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/old-k8s-version-599664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:40:18.777701 4352 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-2531/.minikube/profiles/old-k8s-version-599664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-275936: (1.993765652s)
--- FAIL: TestForceSystemdFlag (504.58s)